Grid cells in the entorhinal cortex exhibit hexagonal spatial firing patterns that are critical to mammalian navigation. The renaissance of deep learning has revived the study of grid pattern formation by training recurrent neural networks (RNNs), yet the underlying mechanism remains unclear. In this thesis, we aim to build connections between the RNN and a classical model, the continuous attractor neural network (CANN). By simplifying the RNN architecture and comparing it with the CANN, we show that the two models are unified from a band-pass filter perspective. Applying this theory, we build a minimal model of grid pattern formation. On the experimental side, we first train the RNN under different settings to verify our claim. We observe an error stabilization phenomenon and a generalization failure in the RNN model. Through a joystick visualization, we identify that both phenomena are attributable to persistent grid activities near the border regions, revealing the distinct boundary dynamics of attractor versus recurrent neural networks.
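The band-pass filter perspective can be illustrated with a minimal sketch. The kernel below is a balanced difference of Gaussians, a standard band-pass choice in CANN-style models of grid patterns; it is an assumption for illustration, not the thesis's exact model. Its Fourier spectrum vanishes at zero frequency and peaks at a finite spatial frequency, which is the property that selects a periodic, grid-like pattern scale.

```python
import numpy as np

n = 128
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
r2 = X**2 + Y**2

# Balanced difference-of-Gaussians kernel: the inhibitory Gaussian is
# scaled so that excitatory and inhibitory mass cancel (zero DC gain).
sig_e, sig_i = 3.0, 6.0
k = np.exp(-r2 / (2 * sig_e**2)) \
    - (sig_e**2 / sig_i**2) * np.exp(-r2 / (2 * sig_i**2))

# Magnitude spectrum of the kernel, shifted so DC sits at the center.
K = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(k))))

center = (n // 2, n // 2)
peak = np.unravel_index(np.argmax(K), K.shape)

# Band-pass signature: near-zero response at DC, maximum response at a
# nonzero spatial frequency (the preferred grid scale).
print("DC gain (relative):", K[center] / K.max())
print("spectral peak offset from DC:", peak != center)
```

Because the spectrum is rotationally symmetric, the preferred frequency is a ring rather than a single point; which hexagonal orientation the pattern adopts is then fixed by boundary conditions and noise, not by the kernel itself.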
| Date of Award | 2021 |
|---|---|
| Original language | English |
| Awarding Institution | The Hong Kong University of Science and Technology |
| Supervisor | Bo LI (Supervisor) & Qifeng CHEN (Supervisor) |
Attractor and recurrent neural networks in grid pattern formation
LIU, Y. (Author). 2021
Student thesis: Master's thesis