BANet: A Blur-Aware Attention Network for Dynamic Scene Deblurring

Fu-Jen Tsai, Yan-Tsung Peng, Chung-Chi Tsai, Yen-Yu Lin, Chia-Wen Lin*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

Image motion blur results from a combination of object motion and camera shake, and such blurring effects are generally directional and non-uniform. Previous research attempted to solve non-uniform blurs using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention to obtain decent results. However, using self-recurrent frameworks typically leads to longer inference times, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring via a single forward pass. BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolutions to aggregate multi-scale content features. Extensive experimental results on the GoPro and RealBlur benchmarks demonstrate that the proposed BANet performs favorably against state-of-the-art methods in blurred image restoration and can provide deblurred results in real time.
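To make the multi-kernel strip pooling idea concrete, the following is a minimal NumPy sketch: average pooling with thin strip-shaped kernels of several sizes captures directional (horizontal/vertical) blur statistics, and the pooled maps gate the input feature map. The function names, kernel sizes, and the sigmoid fusion are illustrative assumptions for a single-channel map; the actual BANet module uses learned convolutional weights and multi-channel features.

```python
import numpy as np

def strip_pool(x, kh, kw):
    # Average-pool a (H, W) map with a (kh, kw) strip kernel,
    # stride 1, zero-padded so the spatial size is preserved.
    H, W = x.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(x, ((ph, kh - 1 - ph), (pw, kw - 1 - pw)))
    out = np.empty_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + kh, j:j + kw].mean()
    return out

def multi_kernel_strip_attention(x, kernels=((1, 7), (7, 1), (1, 3), (3, 1))):
    # Illustrative fusion (hypothetical): sum strip-pooled maps over
    # several horizontal/vertical kernels, squash to (0, 1) with a
    # sigmoid, and reweight the input features with the resulting gate.
    pooled = sum(strip_pool(x, kh, kw) for kh, kw in kernels)
    attn = 1.0 / (1.0 + np.exp(-pooled))  # sigmoid gate
    return x * attn

feat = np.random.rand(16, 16).astype(np.float32)
out = multi_kernel_strip_attention(feat)
print(out.shape)  # (16, 16)
```

Horizontal strips (1, k) average along rows and respond to horizontal blur streaks, while vertical strips (k, 1) respond to vertical ones; mixing several kernel lengths lets the gate reflect blur of different magnitudes and orientations.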

Original language: English
Pages (from-to): 6789-6799
Number of pages: 11
Journal: IEEE Transactions on Image Processing
Volume: 31
State: Published - 2022

Keywords

  • Blur-aware attention module
  • Image deblurring
  • Region-wise pooling attention
