Marco A. Wiering, Pierre Geurts, Gilles Louppe, Matthia Sabatelli - 2018
Publications: arXiv
We introduce a novel Deep Reinforcement Learning (DRL) algorithm called Deep Quality-Value (DQV) Learning. DQV uses temporal-difference learning to train a Value neural network, and uses this network to train a second Quality-value network that learns to estimate state-action values. We first test DQV's update rules with Multilayer Perceptrons as function approximators on two classic RL problems, and then extend DQV with Deep Convolutional Neural Networks, 'Experience Replay' and 'Target Neural Networks' to tackle four games of the Atari Arcade Learning Environment. Our results show that DQV learns significantly faster and better than Deep Q-Learning and Double Deep Q-Learning, suggesting that our algorithm can potentially be a better-performing synchronous temporal-difference algorithm than those currently present in DRL.
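To make the two update rules concrete, below is a minimal sketch of a DQV-style step, assuming a PyTorch setup; the network sizes, hyperparameters, and the `dqv_update` helper are illustrative choices, not the paper's exact implementation (which additionally uses target networks and experience replay in the Atari experiments). Both networks regress toward the same temporal-difference target, r + γV(s'), computed with the Value network.

```python
import torch
import torch.nn as nn

# Hypothetical small MLPs; dimensions and learning rates are illustrative.
class MLP(nn.Module):
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

obs_dim, n_actions, gamma = 4, 2, 0.99
v_net = MLP(obs_dim, 1)          # state-value network V(s)
q_net = MLP(obs_dim, n_actions)  # quality-value network Q(s, a)
v_opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)
q_opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqv_update(s, a, r, s_next, done):
    """One DQV-style step: both networks regress toward the same
    TD target r + gamma * V(s'), computed with the Value network."""
    with torch.no_grad():
        td_target = r + gamma * (1.0 - done) * v_net(s_next).squeeze(-1)

    # Temporal-difference update of the Value network.
    v_loss = nn.functional.mse_loss(v_net(s).squeeze(-1), td_target)
    v_opt.zero_grad(); v_loss.backward(); v_opt.step()

    # The Value network's target also trains the Quality-value network.
    q_sa = q_net(s).gather(1, a.unsqueeze(-1)).squeeze(-1)
    q_loss = nn.functional.mse_loss(q_sa, td_target)
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
```

Because the Quality-value network bootstraps from V(s') rather than from max_a Q(s', a), its target does not depend on its own estimates, which is the intuition behind the faster learning reported above.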
Languages: Python
Official implementation of the paper "Approximating two value functions instead of one: towards characterizing a new family of Deep Reinforcement Learning Algorithms" (https://arxiv.org/abs/1909.01779), to appear at the NeurIPS 2019 Deep Reinforcement Learning Workshop.
Languages: TeX
DQV-Learning: a novel, faster synchronous Deep Reinforcement Learning algorithm