BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Q-learning the learning rate
DTSTART:20101217T111500
DTSTAMP:20260408T015717Z
UID:fd26e84ab4e25d611a244be043ee05edbfb449004cfca31ddc109780
CATEGORIES:Conferences - Seminars
DESCRIPTION:Kerstin Preuschoff\nIn reinforcement learning\, the learning r
 ate is a fundamental parameter that determines how past prediction errors 
 affect future predictions. Traditionally\, the learning rate is kept consta
 nt\, yet behaviorally\, the learning rate is known to change both within a
 nd across contexts. Here\, we propose a model-free approach to setting the
  learning rate. Key to our proposal is that one think about the learning r
 ate as an action to be chosen to minimize a loss (prediction risk). This d
 ifferentiates our proposal from learning models where the learning rate is
  adapted in a mechanistic way or based on a model of the task at hand. We 
 use Q-learning to achieve prediction risk minimization and to simultaneous
 ly learn the prediction risk. Our algorithm produces learning rates that a
 re a function of both risk (uncertainty in stable environments) and volati
 lity (likelihood of changes in the environment). Learning rates decrease w
 ith risk and increase with volatility. The same functional dependence emerge
 s in model-based approaches to setting the learning rate. We discuss behav
 ioral evidence that these results parallel human and animal behavior. Imag
 ing of the dopaminergic system\, insula and anterior cingulate cortex of t
 he primate brain supports the premise behind our account\, namely\, that r
 eward learning is uncertainty-sensitive. 
LOCATION:BC 01 https://plan.epfl.ch/?room=BC%2001
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
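A minimal sketch of the abstract's central idea, under assumptions the abstract does not spell out: each candidate learning rate in a small, hypothetical grid is treated as an action, the squared prediction error stands in for the prediction risk, and a one-state (bandit-style) Q-learning rule both estimates that risk and picks the learning rate with the lowest estimate. The simulated environment mixes risk (Gaussian reward noise) with volatility (occasional jumps in the latent mean); all names and numbers below are illustrative, not the speaker's.

import numpy as np

rng = np.random.default_rng(0)

alphas = np.array([0.05, 0.1, 0.2, 0.4, 0.8])  # candidate learning rates (assumed grid)
q = np.zeros(len(alphas))      # estimated prediction risk for each candidate
meta_lr = 0.05                 # step size for the risk (Q-value) estimates
epsilon = 0.1                  # exploration probability

v = 0.0                        # current reward prediction
mu = 1.0                       # latent mean reward
prev_a = None                  # learning rate chosen on the previous step

for t in range(20000):
    # Volatility: the latent mean occasionally jumps; risk: rewards are noisy around it.
    if rng.random() < 0.01:
        mu = rng.normal(0.0, 2.0)
    reward = mu + rng.normal(0.0, 0.5)

    # Prediction error of the current prediction; its square is the realized prediction risk.
    delta = reward - v
    if prev_a is not None:
        # Credit the squared error to the learning rate chosen on the previous step.
        q[prev_a] += meta_lr * (delta ** 2 - q[prev_a])

    # Epsilon-greedy choice of the next learning rate: pick the one with lowest estimated risk.
    if rng.random() < epsilon:
        a = int(rng.integers(len(alphas)))
    else:
        a = int(np.argmin(q))

    v += alphas[a] * delta     # update the prediction with the chosen learning rate
    prev_a = a

print("estimated prediction risk per candidate learning rate:")
print(dict(zip(alphas.tolist(), np.round(q, 3).tolist())))

The intent is to mirror the qualitative dependence described in the abstract: heavier reward noise should favor the smaller learning rates, while more frequent jumps in the latent mean should favor the larger ones.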
