IQNet: Image Quality Assessment Guided Just Noticeable Difference Prefiltering for Versatile Video Coding

Yu Han Sun, Chiang Lo Hsuan Lee, Tian Sheuan Chang*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Image prefiltering with just noticeable distortion (JND) improves coding efficiency in a visually lossless way by filtering out perceptually redundant information prior to compression. However, real JND cannot be well modeled with the inaccurate masking equations of traditional approaches or the image-level subjective tests of deep learning approaches. Thus, this paper proposes a fine-grained JND prefiltering dataset guided by image quality assessment for accurate block-level JND modeling. The dataset is constructed from decoded images to include coding effects and is perceptually enhanced with block overlap and edge preservation. Furthermore, based on this dataset, we propose a lightweight JND prefiltering network, IQNet, which can be applied directly to different quantization cases with the same model and needs only 3K parameters. The experimental results show that the proposed approach applied to Versatile Video Coding yields maximum/average bitrate savings of 41%/15% and 53%/19% for the all-intra and low-delay P configurations, respectively, with negligible subjective quality loss. Our method demonstrates higher perceptual quality and a model size an order of magnitude smaller than previous deep learning methods.
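The core idea the abstract describes, prefiltering an image while keeping every pixel's change below its just noticeable difference so the distortion stays invisible, can be sketched as follows. This is an illustrative toy example, not the paper's learned IQNet model: it uses a simple box filter and a caller-supplied per-pixel JND map (the function name and signature are hypothetical).

```python
import numpy as np

def jnd_prefilter(image: np.ndarray, jnd_map: np.ndarray,
                  kernel_size: int = 3) -> np.ndarray:
    """Smooth a grayscale image, then clamp each pixel's change to its
    just-noticeable-difference threshold from ``jnd_map``.

    The smoothing removes high-frequency detail (perceptual redundancy
    the encoder would otherwise spend bits on); the clamp keeps each
    pixel within +/- JND of the original, so the filtering is intended
    to be visually lossless.
    """
    pad = kernel_size // 2
    padded = np.pad(image.astype(np.float64), pad, mode="edge")
    h, w = image.shape
    # Box filter: average kernel_size x kernel_size shifted copies.
    smoothed = np.zeros((h, w), dtype=np.float64)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= kernel_size ** 2
    # Clamp the filtered value to within +/- JND of the original pixel.
    return np.clip(smoothed, image - jnd_map, image + jnd_map)
```

In the paper's pipeline this hand-tuned filter is replaced by a small network trained on a block-level, quality-assessment-guided JND dataset, but the clamping principle is the same: only changes the viewer cannot notice are allowed through before compression.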

Original language: English
Pages (from-to): 17-27
Number of pages: 11
Journal: IEEE Open Journal of Circuits and Systems
Volume: 5
State: Published - 2024

Keywords

  • just noticeable distortion
  • video coding
  • video quality assessment
