Trainable Diffractive Surfaces for Spectral Encoding of Spatial Information
ORAL
Abstract
We demonstrate a deep-learning-based single-pixel optical machine vision framework in which multiple diffractive surfaces transform and encode the spatial information of objects into the power spectrum of the diffracted light. Specifically, by predetermining a set of wavelengths, each representing a data class, we trained diffractive surfaces to maximize the power of the diffracted wavelength corresponding to the correct data class, performing all-optical object classification through a single-pixel detector. Using a plasmonic nanoantenna-based spectroscopic detector and 3D-printed diffractive layers, we experimentally validated this design by successfully classifying handwritten digits under snapshot broadband illumination. Further, we combined this all-optical spectral encoding scheme with a separately trained shallow artificial neural network to improve the inference accuracy through feedback between the optical and electronic networks. The same electronic network was also used to reconstruct the images of the input objects solely from the power of the target spectral components, demonstrating the success of our framework as a resource-efficient, data-specific machine vision platform.
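The classification rule described above can be illustrated with a minimal sketch: the single-pixel detector reads out the power at each of the predetermined class wavelengths, and the predicted class is the one whose wavelength carries the most power. The array sizes, the `classify` function, and the simulated spectrum below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical setup: 10 predetermined wavelengths, one per digit class (0-9).
# The trained diffractive surfaces are designed to concentrate diffracted
# power into the wavelength bin of the correct class.

def classify(power_spectrum):
    """Predicted class = index of the predetermined wavelength
    carrying the most power at the single-pixel detector."""
    return int(np.argmax(power_spectrum))

# Simulated detector readout for an object of class 3: background power in
# all bins, with most of the diffracted power routed into the 4th bin.
rng = np.random.default_rng(0)
spectrum = rng.uniform(0.0, 0.2, size=10)
spectrum[3] += 1.0

print(classify(spectrum))  # prints 3
```

In the paper's scheme, a separately trained shallow electronic network can further refine these spectral power readings, but the all-optical decision is exactly this argmax over class wavelengths.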
Presenters
-
Jingxi Li
University of California, Los Angeles
Authors
-
Jingxi Li
University of California, Los Angeles
-
Deniz Mengu
University of California, Los Angeles
-
Nezih Tolga Yardimci
Electrical and Computer Engineering, University of California, Los Angeles
-
Yi Luo
University of California, Los Angeles
-
Xurong Li
University of California, Los Angeles
-
Muhammed Veli
University of California, Los Angeles
-
Yair Rivenson
University of California, Los Angeles
-
Mona Jarrahi
Electrical and Computer Engineering, University of California, Los Angeles
-
Aydogan Ozcan
University of California, Los Angeles