The AlphaZero algorithm has been shown to achieve superhuman levels of play in chess, shogi, and Go. This paper presents an analytic investigation of the algorithm on NoGo, a variant of Go in which players cannot capture the opponent's stones. More specifically, lookup tables are employed for learning instead of deep neural networks; we refer to this variant as tabular AlphaZero. One goal of this work is to investigate how the algorithm is influenced by its hyper-parameters. Another is to investigate whether optimal plays and game-theoretic values can be learned. One of the hyper-parameters is analyzed thoroughly in the experiments. The results show that tabular AlphaZero can learn the game-theoretic values and optimal plays under many settings of this hyper-parameter. In addition, NoGo is compared across different board sizes, and the learning difficulty is shown to correlate with game complexity.
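To make the "lookup tables instead of deep neural networks" idea concrete, the following is a minimal sketch of what a tabular evaluator could look like. It is an illustrative assumption, not the paper's implementation: the class name `TabularEvaluator`, the learning rate `lr`, and the update rule are all hypothetical. The table plays the role of the network f(s) → (policy, value), returning a uniform prior and neutral value for unseen states, and being nudged toward MCTS visit-count targets and observed game outcomes.

```python
class TabularEvaluator:
    """Hypothetical lookup-table replacement for AlphaZero's network.

    Maps a hashable state key to a (policy, value) pair; all details
    here are assumptions for illustration, not the paper's method.
    """

    def __init__(self, n_actions, lr=0.1):
        self.n_actions = n_actions
        self.lr = lr  # step size toward the training targets (assumed)
        self.table = {}  # state key -> (policy list, scalar value)

    def evaluate(self, state):
        # Unseen states fall back to a uniform prior and a neutral value,
        # analogous to an untrained network's output.
        uniform = [1.0 / self.n_actions] * self.n_actions
        return self.table.get(state, (uniform, 0.0))

    def update(self, state, target_policy, target_value):
        # Move the stored entry toward the MCTS visit-count distribution
        # (target_policy) and the game outcome (target_value), mirroring
        # the loss AlphaZero would minimize with gradient descent.
        policy, value = self.evaluate(state)
        new_policy = [p + self.lr * (t - p)
                      for p, t in zip(policy, target_policy)]
        total = sum(new_policy)
        new_policy = [p / total for p in new_policy]  # renormalize
        new_value = value + self.lr * (target_value - value)
        self.table[state] = (new_policy, new_value)


if __name__ == "__main__":
    ev = TabularEvaluator(n_actions=4)
    # Repeatedly train one state toward "action 0 is optimal, win".
    for _ in range(20):
        ev.update("s0", [1.0, 0.0, 0.0, 0.0], 1.0)
    policy, value = ev.evaluate("s0")
    print(policy, value)
```

After enough updates the stored policy concentrates on the target action and the stored value approaches the target outcome, which is the sense in which a table can stand in for the network when the state space is small enough to enumerate, as in small-board NoGo.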