TY - GEN
T1 - Are AlphaZero-like Agents Robust to Adversarial Perturbations?
AU - Lan, Li-Cheng
AU - Zhang, Huan
AU - Wu, Ti-Rong
AU - Tsai, Meng-Yu
AU - Wu, I-Chen
AU - Hsieh, Cho-Jui
N1 - Publisher Copyright:
© 2022 Neural information processing systems foundation. All rights reserved.
PY - 2022
Y1 - 2022
N2 - The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. However, do these superhuman AZ agents truly learn some general basic knowledge that can be applied to any legal state? In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are “semantically” equivalent to the original state by adding meaningless actions to the game, and an adversarial state is a perturbed state that leads to an undoubtedly inferior action, one that is obvious even to amateur players. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, in 58% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent with 50 MCTS simulations play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90% of the examples indeed led the Go agent to play an obviously inferior action. Our code is available at https://PaperCode.cc/GoAttack.
AB - The success of AlphaZero (AZ) has demonstrated that neural-network-based Go AIs can surpass human performance by a large margin. However, do these superhuman AZ agents truly learn some general basic knowledge that can be applied to any legal state? In this paper, we first extend the concept of adversarial examples to the game of Go: we generate perturbed states that are “semantically” equivalent to the original state by adding meaningless actions to the game, and an adversarial state is a perturbed state that leads to an undoubtedly inferior action, one that is obvious even to amateur players. However, searching for adversarial states is challenging due to the large, discrete, and non-differentiable search space. To tackle this challenge, we develop the first adversarial attack on Go AIs that can efficiently search for adversarial states by strategically reducing the search space. This method can also be extended to other board games such as NoGo. Experimentally, we show that both the Policy-Value neural network (PV-NN) and Monte Carlo tree search (MCTS) can be misled by adding one or two meaningless stones; for example, in 58% of the AlphaGo Zero self-play games, our method can make the widely used KataGo agent with 50 MCTS simulations play a losing action by adding two meaningless stones. We additionally evaluated the adversarial examples found by our algorithm with amateur human Go players, and 90% of the examples indeed led the Go agent to play an obviously inferior action. Our code is available at https://PaperCode.cc/GoAttack.
UR - http://www.scopus.com/inward/record.url?scp=85147949883&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85147949883
T3 - Advances in Neural Information Processing Systems
BT - Advances in Neural Information Processing Systems 35 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
A2 - Koyejo, S.
A2 - Mohamed, S.
A2 - Agarwal, A.
A2 - Belgrave, D.
A2 - Cho, K.
A2 - Oh, A.
PB - Neural information processing systems foundation
T2 - 36th Conference on Neural Information Processing Systems, NeurIPS 2022
Y2 - 28 November 2022 through 9 December 2022
ER -