Visible and infrared image fusion
Event details
Date | 21.02.2019 |
Hour | 14:00 › 16:00 |
Speaker | Fayez Lahoud |
Location | |
Category | Conferences - Seminars |
EDIC candidacy exam
Exam president: Prof. Pascal Fua
Thesis advisor: Prof. Sabine Süsstrunk
Co-examiner: Prof. Wenzel Jakob
Abstract
Image fusion is the process of combining complementary information and common features from a set of source images. The challenge in image fusion is to construct images that are appropriate, understandable, and more informative to the viewer than any of the single source images. Common fusion methods combine image transforms of the sources into a single fused transform from which they reconstruct a fused image.
We propose to use pre-trained convolutional networks as feature extractors to compute faster and cleaner fusions than the current state of the art. We look at three prior works to motivate the research. First, we present a generic pixel-level image fusion using image gradients as a decomposition scheme. Second, we explore visible and infrared image fusion with neural networks for pedestrian detection. Finally, we discuss a perceptual evaluation method to compare the quality of different image fusion schemes. Based on these works, we propose our research plan to fuse images based on features extracted from pre-trained networks.
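To illustrate the kind of pixel-level fusion rule the abstract refers to, here is a minimal NumPy sketch that fuses a visible and an infrared image by keeping, at each pixel, the source with the larger local gradient magnitude. This is an illustrative toy example of a generic gradient-driven fusion rule, not the speaker's actual method.

```python
import numpy as np

def gradient_fusion(vis, ir):
    """Fuse two same-size grayscale images by selecting, at each pixel,
    the source whose local gradient magnitude is larger.
    Illustrative sketch only, not the talk's exact algorithm."""
    def grad_mag(img):
        gy, gx = np.gradient(img.astype(float))  # gradients along rows, cols
        return np.hypot(gx, gy)
    mask = grad_mag(vis) >= grad_mag(ir)
    return np.where(mask, vis, ir)

# Toy example: a flat visible image vs. an infrared image with a bright edge.
vis = np.full((4, 4), 100.0)
ir = np.zeros((4, 4))
ir[:, 2:] = 200.0
fused = gradient_fusion(vis, ir)
```

In flat regions both gradients vanish and the rule falls back to the visible source; near the infrared edge the infrared pixels win, so the fused image preserves the thermal structure.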
Background papers
Perceptual evaluation of different image fusion schemes, by Alexander Toet, Eric M. Franken
Spectral Edge Image Fusion: Theory and Applications, by David Connah, Mark Samuel Drew, Graham David Finlayson
Fully Convolutional Region Proposal Networks for Multispectral Person Detection, by Daniel König, Michael Adam, Christian Jarvers, Georg Layher, Heiko Neumann, Michael Teutsch
Practical information
- General public
- Free