Interpreting a Penalty as the Influence of a Bayesian Prior - INRIA - Institut National de Recherche en Informatique et en Automatique
Preprint, working paper. Year: 2020

Interpreting a Penalty as the Influence of a Bayesian Prior

Abstract

In machine learning, it is common to optimize the parameters of a probabilistic model, modulated by a somewhat ad hoc regularization term that penalizes some values of the parameters. Regularization terms appear naturally in Variational Inference (VI), a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback--Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize which regularizers can arise this way, and provide a systematic way to compute the corresponding prior. This viewpoint also provides a prediction for useful values of the regularization factor in neural networks. We apply this framework to regularizers such as L1 or group-Lasso.
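To make the correspondence between a penalty and a prior concrete, here is a minimal sketch of the best-known special case (not taken from the paper's derivations): with a zero-mean Gaussian prior N(0, σ²) and a fixed-variance Gaussian approximate posterior N(θ, s²), the closed-form KL divergence term of the VI loss reduces, up to a constant in θ, to an L2 (weight-decay) penalty θ²/(2σ²). The function names below are illustrative.

```python
import math

def kl_gauss(mu_q, s_q, mu_p, s_p):
    """Closed-form KL( N(mu_q, s_q^2) || N(mu_p, s_p^2) ) for 1-D Gaussians."""
    return math.log(s_p / s_q) + (s_q**2 + (mu_q - mu_p)**2) / (2 * s_p**2) - 0.5

# Assumed setup: prior std sigma, fixed posterior std s, posterior mean theta.
sigma, s, theta = 1.0, 0.1, 0.7

kl = kl_gauss(theta, s, 0.0, sigma)

# The theta-dependent part of the KL is an L2 penalty with factor 1/(2 sigma^2);
# the remaining terms do not depend on theta, so they do not affect optimization.
l2_penalty = theta**2 / (2 * sigma**2)
constant = math.log(sigma / s) + s**2 / (2 * sigma**2) - 0.5
assert abs(kl - (l2_penalty + constant)) < 1e-12
```

This is why weight decay is often read as a Gaussian prior; the paper generalizes the question to arbitrary penalties such as L1 or group-Lasso and asks which priors they correspond to.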
Main file: 2002.00178.pdf (673.2 KB). Origin: files produced by the author(s).

Dates and versions

hal-02466702, version 1 (04-02-2020)

Identifiers

Cite

Pierre Wolinski, Guillaume Charpiat, Yann Ollivier. Interpreting a Penalty as the Influence of a Bayesian Prior. 2020. ⟨hal-02466702⟩
87 views
157 downloads
