Semantic representation learning for a mask-modulated lensless camera by contrastive cross-modal transferring

Ya Ti Chang Lee*, Chung Hao Tien

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Lensless computational imaging, a technique that combines optically modulated measurements with task-specific algorithms, has recently benefited from the application of artificial neural networks. Conventionally, lensless imaging techniques rely on prior knowledge to deal with the ill-posed nature of unstructured measurements, which requires costly supervised approaches. To address this issue, we present a self-supervised learning method that learns semantic representations for modulated scenes from implicitly provided priors. A contrastive loss function is designed to train the target extractor (measurements) from a source extractor (structured natural scenes), transferring cross-modal priors in the latent space. The effectiveness of the new extractor was validated by classifying mask-modulated scenes on unseen datasets, achieving accuracy comparable to that of the source modality (the contrastive language-image pre-training [CLIP] network). The proposed multimodal representation learning method avoids costly data annotation, adapts better to unseen data, and is usable in a variety of downstream vision tasks with unconventional imaging settings.
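The sketch below illustrates the general idea described in the abstract, not the authors' released code: a trainable target extractor embeds mask-modulated measurements, and a symmetric InfoNCE-style contrastive loss pulls each measurement embedding toward the frozen source embedding (e.g., from the CLIP image encoder) of the paired natural scene. All module names, shapes, and the specific loss form are illustrative assumptions.

```python
# Minimal sketch of cross-modal contrastive transfer (assumed details, not the
# paper's exact architecture or loss).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TargetExtractor(nn.Module):
    """Toy CNN mapping a single-channel lensless measurement to a D-dim embedding."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.proj(self.backbone(x))

def cross_modal_contrastive_loss(z_target, z_source, temperature=0.07):
    """Symmetric InfoNCE: the i-th measurement embedding should match the i-th
    frozen scene embedding and repel all other pairs in the batch."""
    z_t = F.normalize(z_target, dim=-1)
    z_s = F.normalize(z_source, dim=-1)
    logits = z_t @ z_s.t() / temperature            # (B, B) similarity matrix
    labels = torch.arange(z_t.size(0), device=z_t.device)
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Usage with random stand-in tensors; a real pipeline would pair each capture
# with the frozen source-extractor (CLIP) embedding of the scene behind it.
target = TargetExtractor()
measurements = torch.randn(8, 1, 128, 128)          # mask-modulated captures
scene_embeds = torch.randn(8, 512)                  # frozen source embeddings
loss = cross_modal_contrastive_loss(target(measurements), scene_embeds)
loss.backward()
```

Freezing the source extractor is what makes the scheme self-supervised in the sense the abstract describes: the natural-scene priors are supplied implicitly through its embeddings rather than through explicit labels.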

Original language: English
Pages (from-to): C24-C31
Journal: Applied Optics
Volume: 63
Issue number: 8
DOIs
State: Published - 10 Mar 2024

