Hippocampal formation-inspired global self-localization: Quick recovery from the kidnapped robot problem from an egocentric perspective

Takeshi Nakashima1, Shunsuke Otake2, Akira Taniguchi1, Katsuyoshi Maeyama1, Lotfi El Hafi1, Tadahiro Taniguchi1, Hiroshi Yamakawa3,4,5
1Ritsumeikan University, 2Osaka University, 3The Whole Brain Architecture Initiative, 4The University of Tokyo, 5RIKEN Center for Advanced Intelligence Project

Abstract

It remains difficult for mobile robots to maintain accurate self-localization when they are suddenly teleported to a location that differs from their belief during navigation. Incorporating insights from neuroscience into the development of a spatial cognition model for mobile robots may make it possible to acquire the ability to respond appropriately to changing situations, as living organisms do. Indeed, recent neuroscience research has shown that during teleportation in rat navigation, neural populations of place cells in the cornu ammonis-3 (CA3) region of the hippocampus, which encode mutually sparse representations, switch discretely. In this study, we construct a spatial cognition model using brain reference architecture-driven development, a method for developing brain-inspired software that is functionally and structurally consistent with the brain. The spatial cognition model was realized by integrating the recurrent state–space model (RSSM), a world model, with Monte Carlo localization (MCL) to infer allocentric self-positions within the framework of the neuro-symbol emergence in robotics toolkit (Neuro-SERKET). The model, which represents the cornu ammonis-1 (CA1) and CA3 regions with separate latent variables, demonstrated improved self-localization performance for mobile robots during teleportation in a simulation environment. Moreover, sparse neural activity was confirmed in the latent variable corresponding to CA3. These results suggest that spatial cognition models incorporating neuroscience insights can help improve self-localization technology for mobile robots.
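The integration described in the abstract can be pictured as a standard particle filter whose observation likelihood comes from a learned model. The sketch below is a minimal illustration under stated assumptions, not the paper's implementation: the Gaussian likelihood, the noise scales, and the `obs_model` interface (standing in for the learned RSSM-like observation model) are all placeholder choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def mcl_step(particles, weights, control, observation, obs_model, motion_noise=0.1):
    """One predict-update-resample cycle of Monte Carlo localization.

    particles: (N, 3) array of (x, y, theta) pose hypotheses.
    obs_model: callable mapping poses to predicted observation vectors,
               standing in for a learned observation model (assumption).
    """
    # Predict: apply the control input with additive noise (simplified motion model).
    particles = particles + control + rng.normal(0.0, motion_noise, particles.shape)
    # Update: weight each particle by an (assumed) Gaussian observation likelihood.
    errors = np.linalg.norm(obs_model(particles) - observation, axis=1)
    weights = weights * np.exp(-0.5 * (errors / 0.5) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses. Recovery from
    # kidnapping would additionally inject fresh globally sampled particles.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```

The key design point is that the observation likelihood need not be hand-crafted: any model that predicts observations from poses can weight the particles.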

Hippocampal formation-inspired spatial cognition model

image of model

(A) Region of interest of this study (hippocampal formation) and its brain information flow;
(B) Graphical model of the RSSM without rewards; (C) MCL;
(D), (E) Our integrated spatial cognition model
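For readers unfamiliar with the RSSM shown in panel (B), the sketch below illustrates its basic transition structure: a deterministic recurrent state updated from the previous state, latent, and action, plus a stochastic latent sampled from a diagonal Gaussian. The dimensions and the tanh-linear layers standing in for learned networks are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
H, Z, A = 32, 8, 2  # placeholder dimensions (assumption)
W_gru = rng.standard_normal((H, H + Z + A)) / np.sqrt(H + Z + A)
W_prior = rng.standard_normal((2 * Z, H)) / np.sqrt(H)

def rssm_step(h, z, action):
    """One RSSM transition: update the deterministic state h, then sample
    the stochastic latent z from a diagonal-Gaussian prior. The tanh-linear
    layers stand in for the learned recurrent/prior networks (assumption)."""
    h_next = np.tanh(W_gru @ np.concatenate([h, z, action]))
    stats = W_prior @ h_next
    mean, log_std = stats[:Z], stats[Z:]
    z_next = mean + np.exp(log_std) * rng.standard_normal(Z)
    return h_next, z_next
```

In training, a posterior that also conditions on the current observation replaces the prior sample, and the two are pulled together with a KL term; the sketch shows only the prior rollout used for prediction.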

Global localization

Black arrow: Ground-truth pose
Red arrow and ellipse: Pose with covariance estimated by the model
Blue arrow and ellipse: Pose with covariance estimated by a submodule (i.e., RSSM or MRSSM)
*Note that this video plays at triple speed.

Learned representation (Environment 2)

The representation obtained in the latent variable z_t (200 dimensions).
Model 2 on the left shows sparser features than Model 1.
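One simple way to quantify such a difference is a population-sparsity score over the 200-dimensional latent activations. The threshold and synthetic data below are illustrative assumptions, not the paper's evaluation protocol.

```python
import numpy as np

def population_sparsity(activations, threshold=0.05):
    """Fraction of latent units whose absolute activation falls below
    `threshold`, averaged over samples; higher means sparser codes."""
    return float(np.mean(np.abs(activations) < threshold))

# Illustrative comparison: a dense code vs. a code with most units silenced.
rng = np.random.default_rng(0)
dense = rng.normal(0.0, 1.0, size=(1000, 200))
sparse = dense * (rng.random((1000, 200)) < 0.1)  # ~90% of units zeroed
assert population_sparsity(sparse) > population_sparsity(dense)
```

Other common choices include the Treves-Rolls sparseness measure or an L1/L2 ratio; the near-zero fraction is simply the easiest to read off a heatmap of latent activity.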

image of representations

BibTeX


      @ARTICLE{nakashima2024hf,
        AUTHOR={Nakashima, Takeshi and Otake, Shunsuke and Taniguchi, Akira and Maeyama, Katsuyoshi and El Hafi, Lotfi and Taniguchi, Tadahiro and Yamakawa, Hiroshi},
        TITLE={Hippocampal formation-inspired global self-localization: quick recovery from the kidnapped robot problem from an egocentric perspective},
        JOURNAL={Frontiers in Computational Neuroscience},
        VOLUME={18},
        YEAR={2024}
      }

Funding

This research was partially supported by JSPS KAKENHI Grants-in-Aid for Scientific Research, grant numbers JP22H05159 and JP23K16975.