Egocentric Spatial Processing in the Lateral Entorhinal Cortex

Daniel Shani, Peter Dayan

There is an abundance of data demonstrating the important role of the hippocampal formation (HF) in spatial navigation. The framework of cognitive maps has been proposed as encompassing roles for the HF in planning and abstract cognition. Despite excellent theoretical and experimental progress in understanding the role of the HF in the formation of a cognitive map, gaps remain. Current approaches consider extensively the experimental data on the hippocampus (HPC) and one of its major inputs, the medial entorhinal cortex (MEC), but treat more superficially the lateral entorhinal cortex (LEC), which is the other major input. One main difference between the MEC and LEC appears to be allocentricity versus egocentricity: MEC neurons are tuned to the allocentric bearing of external items, while LEC neurons encode these items egocentrically. Furthermore, experimental data suggest that the LEC represents associations between objects and contexts.

To investigate the use of egocentric associative representations in spatial navigation, we build a reinforcement learning (RL) agent that can make use of egocentric representations, in addition to allocentric representations, for planning in a 2D environment. This approach differs from those that equate allocentric representations with map-based strategies and egocentric representations with taxon-like habits. Egocentric representations can be more useful for problems in which a good policy is naturally defined relative to the self. Our agent learns to navigate to reward by making use of both allocentric and egocentric successor representations. These are toy versions of the sort of predictive representations that we might see in the MEC and LEC respectively. The idea is that, while allocentric representations would remap in new environments, there could be repeated structure in the egocentric representation that can be exploited, allowing easy generalisation to new environments.
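The two successor representations described above can be sketched as follows. This is a minimal illustration, not the model itself: the gridworld size, the forward/turn action set, the three-cell egocentric wall observation, the random-walk policy, and all learning parameters are assumptions made for the sketch. Both SRs are learned with the standard TD(0) update over their respective discrete state spaces.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 5x5 gridworld bounded by an outer wall; the agent has a heading.
SIZE = 5
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # row/col deltas for N, E, S, W

def blocked(r, c):
    return not (0 <= r < SIZE and 0 <= c < SIZE)

def ego_obs(r, c, h):
    """Egocentric observation: wall occupancy ahead, to the left, to the right."""
    feats = []
    for turn in (0, -1, 1):              # front, left, right relative to heading
        dr, dc = DIRS[(h + turn) % 4]
        feats.append(int(blocked(r + dr, c + dc)))
    return tuple(feats)

# Enumerate the discrete state spaces of the two representations.
allo_states = [(r, c) for r in range(SIZE) for c in range(SIZE)]
ego_states = [(f, l, ri) for f in (0, 1) for l in (0, 1) for ri in (0, 1)]
allo_idx = {s: i for i, s in enumerate(allo_states)}
ego_idx = {s: i for i, s in enumerate(ego_states)}

M_allo = np.zeros((len(allo_states), len(allo_states)))
M_ego = np.zeros((len(ego_states), len(ego_states)))

alpha, gamma = 0.1, 0.9
r, c, h = 2, 2, 0
for _ in range(20000):
    a = rng.integers(3)                  # 0: forward, 1: turn left, 2: turn right
    if a == 0:
        dr, dc = DIRS[h]
        if not blocked(r + dr, c + dc):
            r2, c2, h2 = r + dr, c + dc, h
        else:
            r2, c2, h2 = r, c, h         # bump into wall, stay in place
    else:
        r2, c2, h2 = r, c, (h - 1) % 4 if a == 1 else (h + 1) % 4

    # TD(0) update of the allocentric SR over locations.
    i, j = allo_idx[(r, c)], allo_idx[(r2, c2)]
    M_allo[i] += alpha * (np.eye(len(allo_states))[i] + gamma * M_allo[j] - M_allo[i])

    # TD(0) update of the egocentric SR over local observations.
    i, j = ego_idx[ego_obs(r, c, h)], ego_idx[ego_obs(r2, c2, h2)]
    M_ego[i] += alpha * (np.eye(len(ego_states))[i] + gamma * M_ego[j] - M_ego[i])

    r, c, h = r2, c2, h2
```

The allocentric SR is tied to locations and would remap in a new maze, whereas the egocentric SR is defined over local observations that recur across environments, which is what makes it a candidate substrate for transfer.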

We find that the model learns future-regarding allocentric and egocentric representations and is able to use these to learn additive allocentric and egocentric value functions, which represent global and local policies respectively. The egocentric value functions generalise to new environments and allow faster learning.
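The additive decomposition can be illustrated in a few lines. Under a successor representation M and reward weights w, the value function is simply V = M @ w; here the global (allocentric) and local (egocentric) components are summed. The matrices below are random stand-ins and the state-space sizes are assumptions; the point of the sketch is only that, across environments, the allocentric component is redrawn while the egocentric component is reused.

```python
import numpy as np

rng = np.random.default_rng(1)

def sr_value(M, w):
    """Value under a successor representation: V = M @ w."""
    return M @ w

n_allo, n_ego = 25, 8                      # toy state-space sizes (assumed)
M_ego = rng.random((n_ego, n_ego))         # egocentric SR, shared across environments
w_ego = rng.random(n_ego)                  # egocentric reward weights, also shared

for env in range(2):                       # two "environments"
    M_allo = rng.random((n_allo, n_allo))  # allocentric SR remaps in each environment
    w_allo = np.zeros(n_allo)
    w_allo[rng.integers(n_allo)] = 1.0     # new reward location per environment

    # Additive value over (location, egocentric observation) pairs:
    # a global allocentric term plus a transferred local egocentric term.
    V = sr_value(M_allo, w_allo)[:, None] + sr_value(M_ego, w_ego)[None, :]
```

Only the allocentric pieces need to be relearned in a new environment, so the transferred egocentric component gives the agent a sensible local policy from the start.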

We show how egocentric representations can be used in a map-like manner for flexible navigation and thereby provide a new perspective on the ways that different forms of representation are used in the creation of a cognitive map.
