Enhancement of Hippocampal Spatial Decoding Using a Dynamic Q-Learning Method with a Relative Reward Using Theta Phase Precession

Bo Wei Chen, Shih Hung Yang, Yu Chun Lo, Ching-Fu Wang, Han Lin Wang, Chen Yang Hsu, Yun Ting Kuo, Jung Chen Chen, Sheng Huang Lin, Han Chi Pan, Sheng Wei Lee, Xiao Yu, Boyi Qu, Chao Hung Kuo, You-Yin Chen*, Hsin Yi Lai


Research output: Article, peer-reviewed

1 citation (Scopus)


Hippocampal place cells and interneurons in mammals have stable place fields and theta phase precession profiles that encode spatial environmental information. Hippocampal CA1 neurons can represent the animal's location and prospective information about the goal location. Reinforcement learning (RL) algorithms such as Q-learning have been used to build navigation models. However, traditional Q-learning (tQ-learning) restricts the reward to the moment the animal arrives at the goal location, leading to unsatisfactory location accuracy and convergence rates. We therefore propose a revised version of the Q-learning algorithm, dynamical Q-learning (dQ-learning), which assigns the reward function adaptively to improve decoding performance. Firing rate served as the input to the neural network of dQ-learning and was used to predict the movement direction, while phase precession served as the input to the reward function that updates the weights of dQ-learning. Trajectory predictions using dQ- and tQ-learning were compared by the root mean squared error (RMSE) between the actual and predicted rat trajectories. dQ-learning achieved significantly higher prediction accuracy and a faster convergence rate than tQ-learning across all cell types. Moreover, combining place cells and interneurons with theta phase precession further improved the convergence rate and prediction accuracy. The proposed dQ-learning algorithm is a faster and more accurate method for trajectory reconstruction and prediction.
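The contrast between a goal-only reward and an adaptively assigned reward can be sketched with a minimal tabular Q-learning example. This is a hypothetical illustration, not the paper's model: the linear track, the distance-based "relative" reward standing in for dQ-learning's adaptive reward, and all parameter values are assumptions for demonstration only.

```python
import numpy as np

# Hypothetical 1-D track: positions 0..9, goal at the last state.
N_STATES = 10
ACTIONS = (-1, 1)      # move left or right
GOAL = N_STATES - 1
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # assumed learning parameters

def sparse_reward(state, next_state):
    """Goal-only reward, as in tQ-learning: 1 at the goal, 0 elsewhere."""
    return 1.0 if next_state == GOAL else 0.0

def relative_reward(state, next_state):
    """Adaptive stand-in reward: positive when a step reduces distance to the goal."""
    shaping = (abs(GOAL - state) - abs(GOAL - next_state)) * 0.1
    return shaping + (1.0 if next_state == GOAL else 0.0)

def train(reward_fn, episodes=200, seed=0):
    """Epsilon-greedy tabular Q-learning with the given reward function."""
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, len(ACTIONS)))
    steps_per_episode = []
    for _ in range(episodes):
        s, steps = 0, 0
        while s != GOAL:
            # Epsilon-greedy action selection.
            a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(q[s].argmax())
            s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
            # Standard Q-learning temporal-difference update.
            q[s, a] += ALPHA * (reward_fn(s, s2) + GAMMA * q[s2].max() - q[s, a])
            s, steps = s2, steps + 1
        steps_per_episode.append(steps)
    return q, steps_per_episode

q_sparse, steps_sparse = train(sparse_reward)
q_rel, steps_rel = train(relative_reward)
```

With the sparse reward, value only propagates backward from the goal one step per episode, whereas the shaped reward gives informative feedback at every step, which is the intuition behind assigning the reward adaptively along the trajectory.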

Journal: International Journal of Neural Systems
Publication status: Published - 1 Sep 2020

