Abstract
Hippocampal place cells and interneurons in mammals have stable place fields and theta phase precession profiles that encode spatial information about the environment. Hippocampal CA1 neurons can represent the animal's current location as well as prospective information about the goal location. Reinforcement learning (RL) algorithms such as Q-learning have been used to build navigation models. However, traditional Q-learning (tQ-learning) restricts the reward to the moment the animal arrives at the goal location, leading to unsatisfactory location accuracy and convergence rates. We therefore proposed a revised version of the Q-learning algorithm, dynamical Q-learning (dQ-learning), which assigns the reward function adaptively to improve decoding performance. Firing rate served as the input to the neural network of dQ-learning and was used to predict movement direction, whereas phase precession served as the input to the reward function that updates the weights of dQ-learning. Trajectory predictions from dQ- and tQ-learning were compared by the root mean squared error (RMSE) between the actual and predicted rat trajectories. dQ-learning achieved significantly higher prediction accuracy and a faster convergence rate than tQ-learning for all cell types. Moreover, combining place cells and interneurons with theta phase precession further improved the convergence rate and prediction accuracy. The proposed dQ-learning algorithm is a fast and accurate method for trajectory reconstruction and prediction.
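To make the distinction between a goal-only reward (as in tQ-learning) and an adaptively assigned reward (the idea behind dQ-learning) concrete, below is a minimal tabular Q-learning sketch. The toy grid environment, the `goal_only_reward` and `adaptive_reward` functions, and all parameter values are illustrative assumptions only; the authors' method instead drives a neural network with firing rates and shapes the reward from phase precession.

```python
import numpy as np

# Minimal tabular Q-learning sketch on a toy 5x5 grid; illustrative only.
n_states, n_actions = 25, 4            # actions: 0 up, 1 down, 2 left, 3 right
goal = 24
alpha, gamma, eps = 0.1, 0.9, 0.1      # assumed learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(s, a):
    """Toy grid transition: returns the next state after taking action a."""
    r, c = divmod(s, 5)
    if a == 0:   r = max(r - 1, 0)
    elif a == 1: r = min(r + 1, 4)
    elif a == 2: c = max(c - 1, 0)
    else:        c = min(c + 1, 4)
    return r * 5 + c

def goal_only_reward(s_next):
    # tQ-learning style: reward is delivered only on arrival at the goal.
    return 1.0 if s_next == goal else 0.0

def adaptive_reward(s_next):
    # Hypothetical shaping term standing in for dQ-learning's adaptive reward;
    # here it simply grows as the agent gets closer to the goal.
    gr, gc = divmod(goal, 5)
    r, c = divmod(s_next, 5)
    return 1.0 if s_next == goal else 0.1 / (1 + abs(gr - r) + abs(gc - c))

Q = np.zeros((n_states, n_actions))
for episode in range(200):
    s = 0
    while s != goal:
        # Epsilon-greedy action selection.
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        s_next = step(s, a)
        r = adaptive_reward(s_next)        # swap in goal_only_reward to compare
        # Standard Q-learning temporal-difference update.
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
```

In the paper, trajectory accuracy is summarized by the RMSE between the actual and predicted positions, i.e. the standard quantity $\sqrt{\tfrac{1}{T}\sum_{t=1}^{T}\lVert \mathbf{x}_t - \hat{\mathbf{x}}_t\rVert^2}$.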
| Original language | English |
| --- | --- |
| Article number | 2050048 |
| Journal | International Journal of Neural Systems |
| Volume | 30 |
| Issue number | 9 |
| DOIs | |
| State | Published - 1 Sep 2020 |
Keywords
- adaptive reward function
- dynamical Q-learning
- goal-directed navigation
- interneuron
- phase precession
- place cell