A Hippocampal Model of Visually Guided Navigation as Implemented by a Mobile Agent - INRIA - Institut National de Recherche en Informatique et en Automatique
Conference paper, 2000


Abstract

Visually guided landmark navigation is based on space coding by hippocampal place cells (Pc) [2]. A biologically realistic architecture of cooperative-competitive associative networks, implemented as a control system for mobile agents, emulates place-cell activity during local navigation (self- and goal-localization, and route finding) in exploration and goal-retrieval paradigms. The system builds and stores panoramic views from landmarks and compares these views with current inputs. During exploration, mismatch-induced low levels of recognition activity trigger a vigilance burst (by a mechanism inspired by septal modulation), which favors either the recognition of an alternative place category or the creation of a new category. Implementing only visual "What" and "Where" information does not restrict the generality of the model, since several modalities (proprioceptive and vestibular in particular) could cooperate to give rise to more robust place-field spatial categories [5]. Providing the system with real visual inputs automatically extracted from a natural environment demonstrates that interspecies differences [6] in Pc coding (e.g., Pcs in rats vs. view cells in monkeys) result more from characteristics of the visual systems than from differences in hippocampal-system (Hs) processing. Conversely, according to the model, differences among multiple Pc codes within the same system result from different levels of processing and/or different degrees of multimodality. Each of these codes could be used within different navigational strategies. A control system directly derived from the model allows a mobile agent to learn a few places in an environment, together with the actions to perform at each place in order to reach a goal. The generalization property of the model provides the capacity to reach the goal from any place in the learned environment.
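The mismatch-and-vigilance mechanism described above resembles an ART-style categorization scheme: stored panoramic views are compared against the current input, and when no stored view matches well enough, a vigilance burst creates a new place category. The following is a minimal sketch of that idea only; the class name, normalized-overlap match rule, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class PlaceRecognizer:
    """ART-like place categorizer (illustrative sketch): stored
    panoramic-view prototypes are compared with the current input;
    when no stored view passes the vigilance test, a 'vigilance
    burst' creates a new place category."""

    def __init__(self, vigilance=0.9, learning_rate=0.5):
        self.vigilance = vigilance
        self.lr = learning_rate
        self.prototypes = []  # stored panoramic-view codes

    def _match(self, view, proto):
        # Normalized overlap between the current view and a stored prototype.
        return np.minimum(view, proto).sum() / (view.sum() + 1e-9)

    def recognize(self, view):
        # Try stored categories in order of recognition activity.
        order = sorted(range(len(self.prototypes)),
                       key=lambda i: self._match(view, self.prototypes[i]),
                       reverse=True)
        for i in order:
            if self._match(view, self.prototypes[i]) >= self.vigilance:
                # Resonance: refine the winning prototype toward the view.
                self.prototypes[i] = ((1 - self.lr) * self.prototypes[i]
                                      + self.lr * np.minimum(view, self.prototypes[i]))
                return i
        # Mismatch everywhere: create a new place category.
        self.prototypes.append(view.astype(float).copy())
        return len(self.prototypes) - 1

pr = PlaceRecognizer(vigilance=0.9)
place_a = pr.recognize(np.array([1.0, 0.0, 1.0, 0.0]))   # first view -> new category
place_b = pr.recognize(np.array([0.0, 1.0, 0.0, 1.0]))   # distinct view -> new category
place_a2 = pr.recognize(np.array([1.0, 0.0, 0.9, 0.0]))  # similar view -> same category
```

Raising the vigilance parameter would split the environment into more, finer place categories; lowering it merges similar views into broader ones, which is one way the model's distinct place codes could coexist.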
File not deposited

Dates and versions

hal-00426243 , version 1 (23-10-2009)

Identifiers

  • HAL Id : hal-00426243 , version 1
  • PRODINRA : 248828

Cite

Jean-Paul Banquet, Yves Burnod, Philippe Gaussier, Arnaud Revel. A Hippocampal Model of Visually Guided Navigation as Implemented by a Mobile Agent. International Joint Conference on Neural Networks, 2000, France. pp.2041. ⟨hal-00426243⟩
