Deep Distributional Temporal Difference Learning for Game Playing

Document Type

Conference Proceeding

Publication Date

1-1-2021

Abstract

We compare classic scalar temporal difference learning with three new distributional algorithms for playing the game of 5-in-a-row using deep neural networks: distributional temporal difference learning with a constant learning rate, and two distributional temporal difference algorithms with an adaptive learning rate. All of these algorithms are applicable to any two-player deterministic zero-sum game and can likely be generalized to other settings. All algorithms in our study performed well and developed strong strategies. The algorithms implementing the adaptive methods learned more quickly at first, but in the long run they were outperformed by the algorithms using a constant learning rate, which, without any prior knowledge, learned to play the game at a very high level after 200,000 games of self-play.
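
To make the comparison concrete, below is a minimal, hypothetical sketch of a distributional temporal difference update with a constant learning rate, in the spirit of the abstract. The fixed atom support, the project() helper, and the constant-rate mixing update are illustrative assumptions, not the authors' exact algorithms; the paper's deep-network variants are not reproduced here.

```python
import numpy as np

# Illustrative sketch only: SUPPORT, project(), and the constant-rate
# mixing update below are assumptions, not the paper's exact method.
N_ATOMS = 51
SUPPORT = np.linspace(-1.0, 1.0, N_ATOMS)  # game outcomes: loss .. win

def project(target_values, target_probs):
    """Project a discrete target distribution onto the fixed SUPPORT."""
    probs = np.zeros(N_ATOMS)
    dz = SUPPORT[1] - SUPPORT[0]
    for z, p in zip(target_values, target_probs):
        z = np.clip(z, SUPPORT[0], SUPPORT[-1])
        b = (z - SUPPORT[0]) / dz          # fractional atom index
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:                       # lands exactly on an atom
            probs[lo] += p
        else:                              # split mass between neighbors
            probs[lo] += p * (hi - b)
            probs[hi] += p * (b - lo)
    return probs

def td_update(p_state, reward, gamma, p_next, alpha=0.05):
    """One distributional TD step with constant learning rate alpha:
    mix p_state toward the projected distribution of r + gamma * Z(s')."""
    target = project(reward + gamma * SUPPORT, p_next)
    return (1.0 - alpha) * p_state + alpha * target

# Example: with a point-mass successor distribution at 0 and reward +1,
# the target is a point mass at +1, so the state's distribution shifts
# toward the "win" end of the support.
p_state = np.full(N_ATOMS, 1.0 / N_ATOMS)       # uniform prior
p_next = np.zeros(N_ATOMS)
p_next[N_ATOMS // 2] = 1.0                      # point mass at value 0
p_state = td_update(p_state, reward=1.0, gamma=1.0, p_next=p_next)
```

An adaptive-rate variant, as studied in the paper, would replace the fixed alpha with a per-state or schedule-driven value; the mixing form of the update stays the same.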

Publication Source (Journal or Book title)

Studies in Computational Intelligence

First Page

192

Last Page

206
