Empirical Risk Minimization with Relative Entropy Regularization

Samir M. Perlaza, Gaetan Bisson, Inaki Esnaola, Alain Jean-Marie, Stefano Rini

Research output: Contribution to journal › Article › peer-review

Abstract

The empirical risk minimization (ERM) problem with relative entropy regularization (ERM-RER) is investigated under the assumption that the reference measure is a σ-finite measure, and not necessarily a probability measure. This assumption generalizes the ERM-RER problem and allows greater flexibility in the incorporation of prior knowledge. Under it, several properties of practical relevance are established. Among them, the solution to this problem, if it exists, is shown to be a unique probability measure, mutually absolutely continuous with the reference measure. Such a solution exhibits a probably-approximately-correct (PAC) guarantee for the ERM problem, independently of whether the latter possesses a solution. For a fixed dataset and under a specific condition, the empirical risk is shown to be a sub-Gaussian random variable when the models are sampled from the solution to the ERM-RER problem. The generalization capabilities of this solution (the Gibbs algorithm) are studied via the sensitivity of the expected empirical risk to deviations from the solution towards alternative probability measures. Finally, a connection between sensitivity, generalization error, and lautum information is established.
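For orientation, the following is a minimal sketch of the optimization problem the abstract describes, in illustrative notation; the symbols L, Q, λ, P★, and K are assumptions of this sketch and need not match the paper's own notation.

% A minimal sketch (illustrative notation): given a sigma-finite
% reference measure Q on the set of models, an empirical risk
% L(theta) induced by a fixed dataset, and a regularization
% factor lambda > 0, the ERM-RER problem is
\[
  \min_{P \ll Q} \; \int \mathsf{L}(\theta)\,\mathrm{d}P(\theta) \;+\; \lambda\, D(P \,\|\, Q),
\]
% where the minimization is over probability measures P absolutely
% continuous with respect to Q, and D denotes relative entropy.
% When the solution exists, it is the Gibbs measure with
% Radon-Nikodym derivative
\[
  \frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta) \;=\; \exp\!\left(-K - \frac{1}{\lambda}\,\mathsf{L}(\theta)\right),
\]
% with K a normalizing (log-partition) constant chosen so that
% P-star is a probability measure.

Since this exponential density is strictly positive wherever it is defined, such a sketch is consistent with the mutual absolute continuity between the solution and the reference measure noted in the abstract.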

Original language: English
Pages (from-to): 1
Number of pages: 1
Journal: IEEE Transactions on Information Theory
State: Accepted/In press - 2024

Keywords

  • Empirical Risk Minimization
  • Entropy
  • Generalization
  • Gibbs Algorithm
  • Gibbs Measure
  • Measurement uncertainty
  • PAC-Learning
  • Probability distribution
  • Q measurement
  • Random variables
  • Relative Entropy Regularization
  • Risk minimization
  • Sensitivity
  • Supervised Learning
