Invariant Optical Representations for People, Places and Things
Date: 29.11.2021, 14:15–15:15
Speaker: Dr. Achuta Kadambi, Assistant Professor of EECS at UCLA
Category: Conferences – Seminars
Abstract: Real-world scenes consist of people, places, and things that exhibit diverse visual appearance. Such diversity stems from the fundamental physics of how light interacts with matter. Appearance variations can mesmerize human beings, but they puzzle artificial vision systems, which cannot generalize across such variation. To overcome this problem, my lab invents new computational cameras designed to be invariant to aspects of the physics of appearance. The first segment of the talk will discuss a computational imaging pipeline for seeing through adverse weather using physics-based inductive biases. The second segment discusses optical invariance in health-sensing representations. The remote plethysmograph will be discussed as a bleeding-edge medical device that is only now achieving clinically accurate heart rate; yet, due to its physical mechanism, it does so only for lighter skin tones. We will discuss various bias mitigation techniques, ranging from camera design to simulators to new signal processing algorithms.
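To make the remote plethysmography idea concrete: the basic principle (not the speaker's specific pipeline) is that subtle, pulse-synchronous changes in skin color can be averaged over a face region and the heart rate recovered as the dominant frequency in a physiologically plausible band. The sketch below illustrates this on a synthetic signal; the function name, band limits, and synthetic data are illustrative assumptions, not part of the talk.

```python
import numpy as np

def estimate_heart_rate(green_means, fps):
    """Estimate heart rate (BPM) from a time series of mean green-channel
    values over a face region: find the dominant frequency in the
    plausible heart-rate band (0.7-4.0 Hz, i.e. 42-240 BPM).
    Band limits here are an illustrative assumption."""
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                          # remove the DC (baseline) component
    spectrum = np.abs(np.fft.rfft(x))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)    # restrict to heart-rate band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                     # Hz -> beats per minute

# Synthetic example: a 72 BPM pulse (1.2 Hz) riding on a constant
# skin-brightness baseline, sampled at 30 fps for 10 seconds.
fps, duration, pulse_hz = 30, 10, 1.2
t = np.arange(0, duration, 1.0 / fps)
signal = 100.0 + 0.5 * np.sin(2 * np.pi * pulse_hz * t)
print(round(estimate_heart_rate(signal, fps)))  # → 72
```

The skin-tone bias mentioned in the abstract enters before this step: in darker skin, higher melanin absorption weakens the pulse-synchronous color signal relative to noise, which is why camera design, simulation, and signal processing are all discussed as mitigation points.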
Bio: Achuta Kadambi received his PhD from MIT and joined UCLA, where he is an Assistant Professor in Electrical Engineering and Computer Science. He has co-authored the textbook Computational Imaging (published by MIT Press and available as a free PDF at imagingtext.github.io). He has received early-career recognitions from the NSF (CAREER), DARPA (Young Faculty Award), the Army (Young Investigator Program), and Forbes (30 under 30), and is a co-founder of Akasha Imaging (http://akasha.im).