AI-enhanced vision: seeing the invisible
Institute of Microengineering - Distinguished Lecture
Update as of 13 March 2020: Due to travel restrictions, we regret that the speaker will not be able to travel to EPFL. The lecture will instead be held remotely via Zoom. No transmission to lecture halls will be organized, but participants can join remotely via the Zoom link provided below.
- Campus Lausanne: the talk will not be shown in a lecture hall on the EPFL campus.
- Campus Microcity: the talk will not be transmitted to a lecture hall in Microcity.
Please use the Zoom link below to join remotely:
Zoom Live Stream: https://epfl.zoom.us/j/506874457
Abstract: If you point your camera at a scene and the camera registers nothing, does that mean nothing was really there? Hardly! The camera pixels measure “raw” light intensity, and the information encoded there is often much richer than a human observer could tell just by looking at the pixels on a screen. Which algorithms, then, should one apply to decode the raw intensity and reveal the hidden scene?
In this seminar, I will describe how to use Deep Neural Networks (DNNs), a form of Machine Learning (ML) algorithm, to perform this decoding. During the training stage, physically generated objects are used to produce the encoded raw intensities. From these pairs of objects and raw intensities, the DNN learns the association between scenes and their encoded representations. After training, given a new scene, the DNN decodes it correctly to produce a final reconstructed image that is meaningful to a human observer.
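The training procedure described above can be sketched in a few lines of code. The following is a toy illustration only, not the speaker's actual architecture: the "optical encoding" is a fixed random linear mixing followed by a squared magnitude (a crude stand-in for a diffraction-pattern intensity), and a small fully connected network is trained on (object, raw intensity) pairs to invert it. All shapes, sizes, and hyperparameters are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix = 16  # toy image size (flattened)

# Fixed "optics": a random mixing matrix; measured intensity = (A x)^2.
# This stands in for the physical encoding; it is NOT a real diffraction model.
A = rng.standard_normal((n_pix, n_pix)) / np.sqrt(n_pix)

def encode(x):
    return (A @ x) ** 2

# Training pairs: known objects and the raw intensities they produce
X = rng.random((200, n_pix))          # ground-truth objects
Y = np.array([encode(x) for x in X])  # corresponding raw intensities

# One-hidden-layer decoder: raw intensity -> object estimate
W1 = rng.standard_normal((n_pix, 32)) * 0.1
b1 = np.zeros(32)
W2 = rng.standard_normal((32, n_pix)) * 0.1
b2 = np.zeros(n_pix)

def forward(y):
    h = np.maximum(0.0, y @ W1 + b1)  # ReLU hidden layer
    return h, h @ W2 + b2

lr = 1e-2
losses = []
for epoch in range(200):
    h, pred = forward(Y)
    err = pred - X                    # decoding error vs. true objects
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation through the two layers (mean-squared-error loss)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = Y.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

After training, calling `forward(encode(x_new))` on a new object's intensity yields a decoded estimate of the object, mirroring the train-then-reconstruct workflow the abstract describes.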
My research group and I have applied this approach to three challenging instances of invisibility: transparent objects, also known as “phase objects,” whose raw intensities are highly rippled diffraction patterns; phase objects that are also very dark, i.e. whose diffraction patterns are also highly attenuated; and objects hidden behind or surrounded by diffusers, e.g. frosted glass or multiple layers of glass patterned with sharp light-scattering features.
It is important to emphasize that in our work ML is not used in the traditional way, to interpret scenes; rather, it is used to form interpretable representations of scenes in situations where traditional ML would be helpless due to physical limitations of the optics. The cooperation of ML with physical models proved very powerful in this work and, beyond it, is poised to impact many fundamental and applied aspects of the physical and life sciences and engineering.
Bio: George Barbastathis received the Diploma in Electrical and Computer Engineering in 1993 from the National Technical University of Athens (Πολυτεχνείο) and the MSc and PhD degrees in Electrical Engineering in 1994 and 1997, respectively, from the California Institute of Technology (Caltech). After post-doctoral work at the University of Illinois at Urbana-Champaign, he joined the faculty at MIT in 1999, where he is now Professor of Mechanical Engineering. He has worked or held visiting appointments at Harvard University, the Singapore-MIT Alliance for Research and Technology (SMART) Centre, the National University of Singapore, and the University of Michigan – Shanghai Jiao Tong University Joint Institute (密西根交大學院) in Shanghai, People’s Republic of China. His research interests are in machine learning and optimization for computational imaging and inverse problems, and in optical system design, including artificial optical materials and interfaces. He is a member of the Society of Photo-Optical Instrumentation Engineers (SPIE), the Institute of Electrical and Electronics Engineers (IEEE), and the American Society of Mechanical Engineers (ASME). In 2010 he was elected Fellow of the Optical Society of America (OSA), and in 2015 he was a recipient of China’s Top Foreign Scholar (“One Thousand Scholar”) Award.
Note: The Seminar Series is eligible for ECTS credits in the EDMI doctoral program
Note: After the lecture, there will be time for discussion and interaction with the distinguished speaker. A sandwich lunch and refreshments, sponsored by the Institute of Microengineering, will be provided for attendees in front of the lecture hall (BM 5104, ca. 13h15).
- General public