Decontamination Transformer For Blind Image Inpainting

Chun Yi Li*, Yen Yu Lin, Wei Chen Chiu

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Blind image inpainting aims at recovering the content of a corrupted image when the mask indicating the corrupted regions is not available at inference time. Motivated by the observation that most existing inpainting methods suffer from complex contamination, we propose a model that explicitly predicts the real-valued alpha mask and the contaminant in order to eliminate the contamination from the corrupted image, thus improving inpainting performance. To enhance overall semantic consistency, the attention mechanism of transformers is exploited and integrated into our inpainting network. We conduct extensive experiments comparing our method against blind and non-blind inpainting models and demonstrate its effectiveness and generalizability to different sources of contamination.
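
The abstract describes predicting a real-valued alpha mask and a contaminant layer to remove contamination before inpainting. The sketch below is not the authors' code; it only illustrates, under an assumed standard alpha-compositing corruption model (corrupted = alpha * contaminant + (1 - alpha) * clean), how such predictions could be used for decontamination. The module names `mask_net` and `contaminant_net` and the epsilon clamp are hypothetical placeholders.

```python
# Minimal sketch of a decontamination step, assuming a standard
# alpha-compositing corruption model (not taken from the paper):
#   corrupted = alpha * contaminant + (1 - alpha) * clean
import torch

def decontaminate(corrupted, mask_net, contaminant_net, eps=1e-6):
    """Estimate the decontaminated image from a corrupted input.

    corrupted: (B, 3, H, W) tensor with values in [0, 1].
    mask_net: module predicting a real-valued alpha mask, shape (B, 1, H, W).
    contaminant_net: module predicting the contaminant image, shape (B, 3, H, W).
    """
    alpha = torch.sigmoid(mask_net(corrupted))        # real-valued mask in (0, 1)
    contaminant = contaminant_net(corrupted)          # estimated contaminant layer
    # Invert the compositing model; clamp the denominator for numerical stability
    # where alpha approaches 1 (fully contaminated pixels).
    clean_est = (corrupted - alpha * contaminant) / (1.0 - alpha).clamp(min=eps)
    return clean_est.clamp(0.0, 1.0), alpha, contaminant
```

In such a formulation, fully contaminated regions (alpha near 1) cannot be recovered by inversion alone and would still rely on the subsequent inpainting network to hallucinate content.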

Original language: English
Title of host publication: ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9781728163277
DOIs
State: Published - 2023
Event: 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023 - Rhodes Island, Greece
Duration: 4 Jun 2023 – 10 Jun 2023

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2023-June
ISSN (Print): 1520-6149

Conference

Conference: 48th IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2023
Country/Territory: Greece
City: Rhodes Island
Period: 4/06/23 – 10/06/23

Keywords

  • Blind image inpainting
  • Transformer
