Analyses of Tabular AlphaZero on Strongly-Solved Stochastic Games

Chu-Hsuan Hsueh*, Kokolo Ikeda, I-Chen Wu, Jr-Chang Chen, Tsan-sheng Hsu


Research output: Article › peer-review

1 Citation (Scopus)


The AlphaZero algorithm achieved superhuman levels of play in chess, shogi, and Go by learning without any domain-specific knowledge except the game rules. This paper targets stochastic games and investigates whether AlphaZero can learn theoretical values and optimal play. Since the theoretical values of stochastic games are expected win rates, rather than a simple win, loss, or draw, it is worth investigating AlphaZero's ability to approximate the expected win rates of positions. This paper also thoroughly studies how AlphaZero is influenced by hyper-parameters and by some implementation details. The analyses are mainly based on AlphaZero learning with lookup tables. Deep neural networks (DNNs) like those in the original AlphaZero are also experimented with and compared. The tested stochastic games include reduced and strongly-solved variants of Chinese dark chess and EinStein würfelt nicht!. The experiments showed that AlphaZero could learn policies that play almost optimally against the optimal player and could learn values accurately. In more detail, such good results were achieved by different hyper-parameter settings across a wide range, though games on larger scales tended to have a slightly narrower range of proper hyper-parameters. In addition, the results of learning with DNNs were similar to those with lookup tables.
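The core idea of the tabular setting described above is that a lookup table, rather than a neural network, stores a value estimate per position, and in a stochastic game that estimate should converge to the position's expected win rate. The following is a minimal sketch of this convergence under assumptions not taken from the paper: the position key, the coin-flip outcome model, and the incremental-averaging update are all illustrative.

```python
import random

# Illustrative sketch only: a lookup table whose per-position value estimate
# converges to the expected win rate by incrementally averaging self-play
# outcomes. The toy position and win probability are hypothetical.

random.seed(0)

value_table = {}   # position -> estimated expected win rate
visit_count = {}   # position -> number of updates so far

def update(position, outcome):
    """Incremental average: the estimate converges to E[outcome]."""
    n = visit_count.get(position, 0) + 1
    v = value_table.get(position, 0.0)
    value_table[position] = v + (outcome - v) / n
    visit_count[position] = n

# Toy stochastic position: the player to move wins with probability 0.7.
for _ in range(10000):
    outcome = 1.0 if random.random() < 0.7 else 0.0
    update("toy_position", outcome)

print(round(value_table["toy_position"], 2))  # close to 0.7
```

Unlike a deterministic solved game, where the stored value would settle on exactly 0, 0.5, or 1, the tabular estimate here approaches a fractional expected win rate, which is the quantity the paper measures AlphaZero's value accuracy against.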

Pages (from-to): 18157-18182
Journal: IEEE Access
Publication status: Published - 2023
