Deep Distributional Temporal Difference Learning for Game Playing
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
We compare classic scalar temporal difference learning with three new distributional algorithms for playing the game of 5-in-a-row using deep neural networks: distributional temporal difference learning with a constant learning rate, and two distributional temporal difference algorithms with adaptive learning rates. All of these algorithms are applicable to any two-player deterministic zero-sum game and can probably be generalized successfully to other settings. All algorithms in our study performed well and developed strong strategies. The algorithms implementing the adaptive methods learned more quickly in the beginning, but in the long run they were outperformed by the algorithms using a constant learning rate, which, without any prior knowledge, learned to play the game at a very high level after 200,000 games of self-play.
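For readers unfamiliar with the baseline, the following is a minimal illustrative sketch (not the authors' implementation) of the classic scalar TD(0) value update that the abstract compares against; the distributional variants instead maintain a distribution over outcomes for each state, and the parameter names (alpha, gamma) and toy states are illustrative assumptions.

    # Minimal scalar TD(0) sketch: V(s) <- V(s) + alpha * (r + gamma * V(s') - V(s))
    def td0_update(V, state, reward, next_state, alpha=0.1, gamma=1.0):
        """Apply one scalar temporal difference update to the value table V."""
        v_next = V.get(next_state, 0.0)  # terminal states default to value 0
        td_error = reward + gamma * v_next - V.get(state, 0.0)
        V[state] = V.get(state, 0.0) + alpha * td_error
        return td_error

    # Toy usage: a two-step episode ending in a win (reward +1).
    V = {}
    td0_update(V, state="s0", reward=0.0, next_state="s1")
    td0_update(V, state="s1", reward=1.0, next_state=None)  # None marks a terminal position
    print(V)

In the distributional setting described in the paper, the scalar value V(s) would be replaced by a predicted distribution over game outcomes, with either a constant or an adaptively chosen learning rate governing how strongly each new target shifts that prediction.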
Publication Source (Journal or Book title)
Studies in Computational Intelligence
First Page
192
Last Page
206
Recommended Citation
Berglind, F., Chen, J., & Sopasakis, A. (2021). Deep Distributional Temporal Difference Learning for Game Playing. Studies in Computational Intelligence, 949, 192-206. https://doi.org/10.1007/978-3-030-67148-8_14