BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:BMI Distinguished Seminar // Nelson Spruston\; Learning produces a
  hippocampal cognitive map in the form of an orthogonalized state machine
DTSTART:20240306T160000
DTEND:20240306T170000
DTSTAMP:20260406T124149Z
UID:57cacf519acb1767b3bb5fc389238c8b529a9e971a81d23b1e23cde9
CATEGORIES:Conferences - Seminars
DESCRIPTION:Nelson Spruston\, Janelia\, Ashburn\, USA\nPlease note the
  exceptional start time of 16:00\n\nCognitive maps confer animals with
  flexible intelligence by represe
 nting spatial\, temporal\, and abstract relationships that can be used to 
 shape thought\, planning\, and behavior. Cognitive maps have been observed
  in the hippocampus\, but their algorithmic form and the processes by whic
 h they are learned remain obscure. Here\, we employed large-scale\, longit
 udinal two-photon calcium imaging to record activity from thousands of neu
 rons in the CA1 region of the hippocampus while mice learned to efficientl
 y collect rewards from two subtly different versions of linear tracks in v
 irtual reality. The results provide a detailed view of the formation of a 
  cognitive map in the hippocampus. Throughout learning\, both the animals' be
 havior and hippocampal neural activity progressed through multiple interme
 diate stages\, gradually revealing improved task representation that mirro
 red improved behavioral efficiency. The learning process led to progressiv
 e decorrelations in initially similar hippocampal neural activity within a
 nd across tracks\, ultimately resulting in orthogonalized representations 
 resembling a state machine capturing the inherent structure of the task. W
 e show that a Hidden Markov Model (HMM) and a biologically plausible recur
 rent neural network trained using Hebbian learning can both capture core a
 spects of the learning dynamics and the orthogonalized representational st
 ructure in neural activity. In contrast\, we show that gradient-based lear
 ning of sequence models such as Long Short-Term Memory networks (LSTMs) an
 d Transformers does not naturally produce such orthogonalized representation
 s. We further demonstrate that mice exhibited adaptive behavior in novel t
 ask settings\, with neural activity reflecting flexible deployment of the 
 state machine. These findings shed light on the mathematical form of cogni
 tive maps\, the learning rules that sculpt them\, and the algorithms that 
 promote adaptive behavior in animals. The work thus charts a course toward
  a deeper understanding of biological intelligence and offers insights tow
 ard developing more robust learning algorithms in artificial intelligence.
LOCATION:SV 1717 https://plan.epfl.ch/?room==SV%201717 https://epfl.zoom.u
 s/j/64813563657
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
