Sample-efficient, low-light image sensing through Eigentask Learning: Part 2 (Experiment)
ORAL
Abstract
Noise is unavoidable when extracting information from analog sensors, and is especially problematic when the signal to be sensed is weak. Given a weak signal and a noisy analog sensor, it is imperative to extract as much information as possible; for inference purposes, this information typically lies in a much lower-dimensional space than the raw sampled data. In Part 1, we showed that a physical system can perform a certain set of transformations, termed eigentasks [1], which are robust to sampling and readout noise. In this part, we experimentally demonstrate the benefit of computing these eigentasks from sensor data in low-signal-to-noise-ratio conditions. We show that the eigentask basis creates a low-dimensional, noise-robust latent space that outperforms standard noise-mitigation techniques, such as principal component analysis and low-pass filtering, across several low-light imaging tasks. To demonstrate the universality of the eigentasks, we illustrate this performance enhancement across different optical image sensors. For low-light machine-vision applications, extracting sensor information in the eigentask basis allows for a considerable reduction in the training requirements of the vision pipeline. In general, eigentasks are aptly positioned to mitigate the effects of noise by optimally pre-processing sensor data, thus leading to the design of efficient sensing pipelines.
[1] Hu et al. Phys. Rev. X 13, 041020 (2023).
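As a rough illustration of the kind of pre-processing the abstract describes, the sketch below builds a noise-robust latent space from repeated noisy sensor readouts. It is a minimal sketch, not the authors' pipeline: it assumes (per one common reading of [1]) that eigentasks can be obtained from a generalized eigenvalue problem between the Gram matrix of mean sensor responses and the average readout-noise covariance, ranking directions by signal-to-noise ratio. All data here are synthetic stand-ins; see [1] for the exact definitions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Synthetic stand-in for sensor data: n_inputs stimuli, each measured
# n_shots times across n_features noisy readout channels.
n_inputs, n_shots, n_features = 200, 50, 16
clean = rng.standard_normal((n_inputs, n_features))  # hypothetical mean responses
shots = clean[:, None, :] + 0.5 * rng.standard_normal(
    (n_inputs, n_shots, n_features)
)

mean_resp = shots.mean(axis=1)  # per-input mean readout

# Signal second-moment (Gram) matrix and average per-input noise covariance.
G = mean_resp.T @ mean_resp / n_inputs
V = np.mean([np.cov(shots[i].T) for i in range(n_inputs)], axis=0)
V += 1e-9 * np.eye(n_features)  # small ridge for numerical stability

# Generalized symmetric eigenproblem G r = beta^2 V r. scipy's eigh returns
# eigenvalues in ascending order, so reverse to rank directions by SNR.
beta2, R = eigh(G, V)
order = np.argsort(beta2)[::-1]
beta2, R = beta2[order], R[:, order]

# Project measurements onto the top-k directions -> low-dimensional,
# noise-robust latent space (cf. the PCA baseline, which would instead
# diagonalize G alone and ignore the noise covariance V).
k = 4
latent = mean_resp @ R[:, :k]
print(latent.shape)
```

The key design point is that, unlike PCA, the generalized eigenproblem weights signal variance against the measured noise covariance, so high-variance but noisy channels are de-emphasized.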
Presenters
-
Mandar Sohoni
Cornell University
Authors
-
Mandar Sohoni
Cornell University
-
Tianyang Chen
Princeton University
-
Saeed A Khan
Cornell University
-
Jeremie Laydevant
Cornell University
-
Shi-Yuan Ma
Cornell University
-
Tianyu Wang
Boston University
-
Hakan E Tureci
Princeton University
-
Peter L McMahon
Cornell University