Algorithms for Non-Stationary Generalized Linear Bandits - INRIA - Institut National de Recherche en Informatique et en Automatique
Preprint, Working Paper, Year: 2020

Algorithms for Non-Stationary Generalized Linear Bandits

Abstract

The statistical framework of Generalized Linear Models (GLM) can be applied to sequential problems involving categorical or ordinal rewards associated, for instance, with clicks, likes, or ratings. In the case of binary rewards, logistic regression is well known to be preferable to standard linear modeling. Previous works have shown how to handle GLMs in contextual online learning with bandit feedback when the environment is assumed to be stationary. In this paper, we relax that assumption and propose two upper-confidence-bound-based algorithms that rely on either a sliding window or a discounted maximum-likelihood estimator. We provide theoretical guarantees on the behavior of these algorithms for general context sequences and in the presence of abrupt changes. These results take the form of high-probability upper bounds on the dynamic regret of order d^{2/3} G^{1/3} T^{2/3}, where d, T, and G are, respectively, the dimension of the unknown parameter, the number of rounds, and the number of breakpoints up to time T. The empirical performance of the algorithms is illustrated in simulated environments.
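The two estimators mentioned in the abstract can be sketched as a single weighted logistic maximum-likelihood problem: a sliding window corresponds to 0/1 sample weights, while the discounted variant uses geometric weights. The sketch below is a minimal illustration of this idea, not the authors' implementation; the function names, Newton-solver details, and regularization parameter `lam` are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def weighted_logistic_mle(X, y, weights, lam=1.0, n_iter=25):
    """Penalized logistic maximum-likelihood with per-sample weights,
    solved by Newton's method. A hard sliding window corresponds to
    0/1 weights; the discounted estimator to geometric weights."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.asarray(weights, dtype=float)
    d = X.shape[1]
    theta = np.zeros(d)
    for _ in range(n_iter):
        p = sigmoid(X @ theta)
        # Weighted gradient and Hessian of the penalized log-likelihood.
        grad = X.T @ (w * (p - y)) + lam * theta
        H = (X * (w * p * (1 - p))[:, None]).T @ X + lam * np.eye(d)
        theta = theta - np.linalg.solve(H, grad)
    return theta

def sliding_window_weights(n, tau):
    """0/1 weights keeping only the last tau observations."""
    w = np.zeros(n)
    w[-tau:] = 1.0
    return w

def discount_weights(n, gamma):
    """Geometric weights gamma**(n-1-s): recent samples weigh more."""
    return gamma ** np.arange(n - 1, -1, -1)
```

Either weight vector can then be plugged into the same solver; after an abrupt change, both schemes progressively forget observations generated under the old parameter.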

Dates and versions

hal-02514151, version 1 (21-03-2020)

Identifiers

Cite

Yoan Russac, Olivier Cappé, Aurélien Garivier. Algorithms for Non-Stationary Generalized Linear Bandits. 2020. ⟨hal-02514151⟩
