Abstract
We study time-inhomogeneous episodic reinforcement learning (RL) under general function approximation and sparse rewards. We design a new algorithm, Variance-weighted Optimistic Q-Learning (VOQL), based on Q-learning, and bound its regret assuming closure under Bellman backups and bounded Eluder dimension for the regression function class. As a special case, VOQL achieves Õ(d√(TH) + d⁶H⁵) regret over T episodes for a horizon-H MDP under (d-dimensional) linear function approximation, which is asymptotically optimal. Our algorithm incorporates weighted regression-based upper and lower bounds on the optimal value function to obtain this improved regret. The algorithm is computationally efficient given a regression oracle over the function class, making this the first computationally tractable and statistically optimal approach for linear MDPs.
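Written out in LaTeX, the linear-MDP regret bound stated above (with d the feature dimension, H the horizon, and T the number of episodes) reads:

```latex
% Regret of VOQL under d-dimensional linear function approximation,
% as stated in the abstract; \widetilde{O} hides polylogarithmic factors.
\mathrm{Regret}(T) \le \widetilde{O}\!\left( d\sqrt{TH} + d^{6}H^{5} \right)
```

The leading d√(TH) term dominates for large T, which is what makes the bound asymptotically optimal.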
| Original language | English |
|---|---|
| Pages (from-to) | 987-1063 |
| Number of pages | 77 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 195 |
| Publication status | Published - 2023 |
| Externally published | Yes |
| Event | 36th Annual Conference on Learning Theory, COLT 2023, Bangalore, India, 12 Jul 2023 – 15 Jul 2023 |
Bibliographical note
Publisher Copyright: © 2023 A. Agarwal, Y. Jin & T. Zhang.
Keywords
- Reinforcement learning
- eluder dimension
- model-free algorithms
- nonlinear function approximation