BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:Data-Efficient Learning in Autonomous Robots
DTSTART:20160422T141500
DTEND:20160422T151500
DTSTAMP:20260407T091122Z
UID:b5bf922c530895efb1b6c01777a1524d2e658931753ab753d222e761
CATEGORIES:Conferences - Seminars
DESCRIPTION:Marc Deisenroth\, ICL\nBio: Marc Deisenroth is a Lecturer
 in Statistical Machine Learning at the Department of Computing\,
 Imperial College London. Prior to his appointment\, he was an Imperial
 College Research Fellow (09/2013–06/2015)\, Senior Research Scientist
 & Group Leader at TU Darmstadt (12/2011–08/2013)\, and Research
 Associate at the University of Washington and Intel Labs Seattle
 (02/2010–12/2011). Marc completed his PhD in 2009 with Carl Edward
 Rasmussen. Marc was Program Chair of EWRL 2012 and received a Best
 Paper Award at ICRA 2014. He is a recipient of a Google Faculty
 Research Award and a Microsoft PhD Scholarship. Marc’s research
 interests center around data-efficient machine learning methods (with
 a focus on Bayesian methods)\, with the objective of increasing the
 level of autonomy in learning systems by modeling and accounting for
 uncertainty in a principled way. Potential applications include
 personalized healthcare\, autonomous robots\, and bio-chemical
 systems.\nFully autonomous systems and robots have been a vision for
 many decades\, but we are still far from practical realization. One of
 the fundamental challenges in fully autonomous systems and robots is
 learning from data directly\, without relying on any kind of intricate
 human knowledge. This requires data-driven statistical methods for
 modeling\, prediction\, and decision making that take uncertainty into
 account\, e.g.\, due to measurement noise\, sparse data\, or
 stochasticity in the environment.\nIn my talk I will focus on machine
 learning methods for controlling autonomous robots\, which pose an
 additional practical challenge: data efficiency\, i.e.\, we need to be
 able to learn controllers in a few experiments\, since performing
 millions of experiments with robots is time-consuming and wears out
 the hardware. To address this problem\, current learning approaches
 typically require task-specific knowledge in the form of expert
 demonstrations\, pre-shaped policies\, or the underlying dynamics.\nIn
 the first part of the talk\, I follow a different approach and speed
 up learning by efficiently extracting information from sparse data. In
 particular\, I propose to learn a probabilistic\, non-parametric
 Gaussian process dynamics model. By explicitly incorporating model
 uncertainty into long-term planning and controller learning\, my
 approach reduces the effects of model errors\, a key problem in
 model-based learning. Compared to state-of-the-art reinforcement
 learning\, our model-based policy search method achieves an
 unprecedented speed of learning\, which makes it most promising for
 application to real systems. I demonstrate its applicability to
 autonomous learning from scratch on real robot and control tasks.\nIn
 the second part of my talk\, I will discuss an alternative method for
 learning controllers for bipedal locomotion based on Bayesian
 optimization\, where it is hard to learn models of the underlying
 dynamics due to ground contacts. Using Bayesian optimization\, we
 sidestep this modeling issue and directly optimize the controller
 parameters without the need to model the robot's dynamics.
LOCATION:ME C2 405 http://plan.epfl.ch/?zoom=20&recenter_y=5864084.17342&r
 ecenter_x=730960.62257&layerNodes=fonds\,batiments\,labels\,information\,p
 arkings_publics\,arrets_metro\,transports_publics&floor=2&q=me_c2%20405
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
