Zap Q-Learning for Optimal Stopping Time Problems

Preprint / Working paper. Year: 2019

Abstract

The objective of this paper is to obtain fast-converging reinforcement learning algorithms that approximate solutions to the problem of discounted-cost optimal stopping in an irreducible, uniformly ergodic Markov chain evolving on a compact subset of $\mathbb{R}^n$. We build on the dynamic programming approach of Tsitsiklis and Van Roy, who propose a Q-learning algorithm to estimate the optimal state-action value function, which in turn defines an optimal stopping rule. We provide insights into why the convergence rate of this algorithm can be slow, and propose a fast-converging alternative, the "Zap Q-learning" algorithm, designed to achieve the optimal rate of convergence. For the first time, we prove the convergence of the Zap Q-learning algorithm in the linear function approximation setting. The proof relies on ODE analysis, and the optimal asymptotic variance of the algorithm is reflected in its fast convergence in a finance example.
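To make the two-time-scale structure of the Zap-Q recursion concrete, here is a minimal Python sketch. Everything problem-specific is an assumption for illustration: the toy chain `step`, the stopping reward `G`, the features `phi`, and the step-size choices are invented, not the paper's finance example, and the safeguards a full treatment would need (projection, regularization of the gain matrix) are omitted. The sketch shows the idea named in the abstract: a matrix gain tracks an estimate of the mean derivative of the Q-learning update on a faster time scale, and its inverse preconditions the parameter step.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy problem (assumed for illustration; not the paper's example) ---
beta = 0.95                        # discount factor

def step(x):
    """One transition of an ergodic chain on [0, 1] (toy choice)."""
    return float(np.clip(x + rng.normal(scale=0.1), 0.0, 1.0))

def G(x):
    """Reward collected upon stopping (toy choice)."""
    return max(x - 0.5, 0.0)

def phi(x):
    """Linear function-approximation features for the continuation value Q."""
    return np.array([1.0, x, x * x])

d = 3
theta = np.zeros(d)                # Q(x) is approximated by phi(x) @ theta
A_hat = -np.eye(d)                 # running estimate of the mean update matrix
x = rng.random()

for n in range(1, 50_001):
    alpha = 1.0 / n                # step size for the parameter recursion
    gamma = 1.0 / n**0.85          # faster step size for the matrix-gain recursion
    x_next = step(x)

    # TD error for optimal stopping: the target at x_next is
    # beta * max(stop reward, estimated continuation value)
    td = beta * max(G(x_next), phi(x_next) @ theta) - phi(x) @ theta

    # Sampled derivative of the update w.r.t. theta; the max() contributes
    # only on the continuation region
    cont = float(phi(x_next) @ theta > G(x_next))
    A_n = np.outer(phi(x), beta * cont * phi(x_next) - phi(x))

    # Two-time-scale Zap recursion: refine the gain estimate, then
    # precondition the stochastic-approximation step with its inverse
    A_hat += gamma * (A_n - A_hat)
    theta -= alpha * np.linalg.solve(A_hat, phi(x) * td)

    x = x_next

def stop_rule(x):
    """Stop as soon as stopping beats the estimated continuation value."""
    return G(x) >= phi(x) @ theta
```

The matrix-gain step is what distinguishes this from the scalar-step-size algorithm of Tsitsiklis and Van Roy: with `A_hat` fixed at the identity the loop reduces to their Q-learning update, whose convergence can be slow for the reasons the paper analyzes.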

Dates and versions

hal-02425983 , version 1 (31-12-2019)

Identifiers

HAL Id: hal-02425983

Cite

Shuhang Chen, Adithya M. Devraj, Ana Bušić, Sean P. Meyn. Zap Q-Learning for Optimal Stopping Time Problems. 2019. ⟨hal-02425983⟩
44 views
0 downloads
