Analyses of Tabular AlphaZero on Strongly-Solved Stochastic Games

Chu-Hsuan Hsueh*, Kokolo Ikeda, I-Chen Wu, Jr-Chang Chen, Tsan-sheng Hsu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review



The AlphaZero algorithm achieved superhuman levels of play in chess, shogi, and Go by learning without domain-specific knowledge other than the game rules. This paper targets stochastic games and investigates whether AlphaZero can learn theoretical values and optimal play. Since the theoretical values of stochastic games are expected win rates rather than a simple win, loss, or draw, it is worth investigating AlphaZero's ability to approximate the expected win rates of positions. This paper also thoroughly studies how AlphaZero is influenced by hyper-parameters and some implementation details. The analyses are mainly based on AlphaZero learning with lookup tables. Deep neural networks (DNNs) like the ones in the original AlphaZero are also experimented with and compared. The tested stochastic games include reduced and strongly-solved variants of Chinese dark chess and EinStein würfelt nicht!. The experiments showed that AlphaZero could learn policies that play almost optimally against the optimal player and could learn values accurately. In more detail, such good results were achieved by different hyper-parameter settings across a wide range, though games on larger scales tended to have a slightly narrower range of proper hyper-parameters. In addition, the results of learning with DNNs were similar to those with lookup tables.
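The tabular setting described above can be illustrated with a minimal sketch (the function name, learning rate, and toy position are illustrative, not taken from the paper): instead of a DNN, a lookup table keeps one value estimate per position, and each estimate is nudged toward sampled game outcomes so that it approaches the position's expected win rate.

```python
import random

def update_value(table, state, outcome, lr=0.1):
    """Move the stored estimate for `state` toward the observed outcome.

    With many samples, the entry converges toward the expected win rate,
    which is the theoretical value of a stochastic-game position.
    """
    v = table.get(state, 0.5)          # initialize unseen states at 0.5
    table[state] = v + lr * (outcome - v)
    return table[state]

# Toy demonstration: a position whose true expected win rate is 0.75.
random.seed(0)
table = {}
for _ in range(5000):
    outcome = 1.0 if random.random() < 0.75 else 0.0
    update_value(table, "pos", outcome, lr=0.01)

print(table["pos"])  # a value near 0.75
```

In the full algorithm the targets would come from self-play games guided by Monte-Carlo tree search rather than from a fixed coin flip, but the table update itself is this simple exponential moving average.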

Original language: English
Pages (from-to): 18157-18182
Number of pages: 26
Journal: IEEE Access
State: Published - 2023


  • AlphaZero
  • board games
  • Chinese dark chess
  • EinStein würfelt nicht!
  • reinforcement learning
  • stochastic games
  • tabular

