BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Memento EPFL//
BEGIN:VEVENT
SUMMARY:From SLAM to Real-time Scene Understanding: 3D Dynamic Scene Graph
 s and Certifiable Perception Algorithms
DTSTART:20211005T161500
DTEND:20211005T171500
DTSTAMP:20260511T050417Z
UID:8826c1454e6016e5621c1dde524cd5109e5e79a4b3dbd7fe64d64db4
CATEGORIES:Conferences - Seminars
DESCRIPTION:Prof. Luca Carlone (MIT)\nAbstract: Spatial perception\, the
  robot’s ability to sense and understand the surrounding environment\,
  is a key enabler for autonomous systems operating in complex environment
 s\,
  including self-driving cars and unmanned aerial vehicles. Recent advances
  in perception algorithms and systems have enabled robots to detect object
 s and create large-scale maps of an unknown environment\, which are crucia
 l capabilities for navigation\, manipulation\, and human-robot interaction
 . Despite these advances\, researchers and practitioners are well aware of
  the brittleness of existing perception systems\, and a large gap still se
 parates robot and human perception.\n\nThis talk discusses two efforts tar
 geted at bridging this gap. The first effort targets high-level understand
 ing. While humans can quickly grasp geometric\, semantic\, and physical
  aspects of a scene\, high-level scene understanding remains a c
 hallenge for robotics. I present our work on real-time metric-semantic und
 erstanding and 3D Dynamic Scene Graphs. I introduce the first generation o
 f Spatial Perception Engines\, which extend the traditional notions of ma
 pping and SLAM\, and allow a robot to build a “mental model” of the envi
 ronment\, including spatial concepts (e.g.\, humans\, objects\, rooms\, bu
 ildings) and their relations at multiple levels of abstraction. The second
  effort focuses on robustness. I present recent advances in the design of 
 certifiable perception algorithms that are robust to extreme amounts of no
 ise and outliers and afford performance guarantees. I present fast certifi
 able algorithms for object pose estimation: our algorithms are “hard to 
 break” (e.g.\, are robust to 99% outliers) and succeed in localizing obj
 ects where an average human would fail. Moreover\, they come with a “con
 tract” that guarantees their input-output performance.\n\nSpeaker Bio: L
 uca Carlone is the Leonardo Career Development Associate Professor in the 
 Department of Aeronautics and Astronautics at the Massachusetts Institute 
 of Technology\, and a Principal Investigator in the Laboratory for Informa
 tion & Decision Systems (LIDS). He received his PhD from the Polytechnic U
 niversity of Turin in 2012. He joined LIDS as a postdoctoral associate in
  2015 and became a Research Scientist there in 2016\, after spending two
  years as a postdoctoral fellow at the Georgia Institute of Technology (2
 013-2015).
  His research interests include nonlinear estimation\, numerical and distr
 ibuted optimization\, and probabilistic inference\, applied to sensing\, p
 erception\, and decision-making in single and multi-robot systems. His wor
 k includes seminal results on certifiably correct algorithms for localizat
 ion and mapping\, as well as approaches for visual-inertial navigation and
  distributed mapping. He is a recipient of the Best Paper Award in Robot V
 ision at ICRA 2020\, a 2020 Honorable Mention from the IEEE Robotics and A
 utomation Letters\, a Track Best Paper award at the 2021 IEEE Aerospace Co
 nference\, the 2017 Transactions on Robotics King-Sun Fu Memorial Best Pap
 er Award\, the Best Paper Award at WAFR 2016\, the Best Student Paper Awar
 d at the 2018 Symposium on VLSI Circuits\, and he was a best paper finali
 st at RSS 2015 and RSS 2021. He is also a recipient of the NSF CAREER Awa
 rd (
 2021)\, the RSS Early Career Award (2020)\, the Google Daydream (2019) an
 d Amazon (2020) Research Awards\, and the MIT AeroAstro Vickie Kerrebroc
 k Faculty Award (2020). At MIT\, he teaches “Robotics: Science and Syste
 ms\,” the introduction to robotics for MIT undergraduates\, and he creat
 ed the graduate-level course “Visual Navigation for Autonomous Vehicles
 ”\, which covers mathematical foundations and fast C++ implementations o
 f spatial perception algorithms for drones and autonomous vehicles.
LOCATION:https://epfl.zoom.us/j/3419340979
STATUS:CONFIRMED
END:VEVENT
END:VCALENDAR
