CMAF: Cross-Modal Augmentation via Fusion for Underwater Acoustic Image Recognition

Shih-Wei Yang*, Li-Hsiang Shen, Hong-Han Shuai*, Kai-Ten Feng*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Underwater image recognition is crucial for underwater detection applications, and fish classification has become an emerging research area in recent years. However, image classification models trained on terrestrial data are unsuitable for underwater images, whose features are often incomplete and noisy. To address this, we propose a cross-modal augmentation via fusion (CMAF) framework for acoustic-based fish image classification. Our approach separates the process into two branches, a visual modality and a sonar-signal modality, where the latter provides complementary characteristic features. We augment the visual modality, design an attention-based fusion module, and adopt a masking-based training strategy with a mask-based focal loss to improve the learning of local features and to address the class imbalance problem. Our proposed method outperforms state-of-the-art methods. Our source code is available at https://github.com/WilkinsYang/CMAF.
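The abstract does not spell out the mask-based focal loss; as a rough illustration, the sketch below implements the standard binary focal loss (Lin et al.) that such a variant would presumably extend, with a hypothetical per-sample `mask` weight standing in for the paper's masking strategy. All names and the masking behavior here are assumptions, not the authors' implementation.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25, mask=None):
    """Standard binary focal loss; `mask` is a hypothetical per-sample
    weight emphasizing masked regions, loosely mirroring the paper's
    mask-based variant (details are not given in the abstract)."""
    # p_t: probability the model assigns to the true class
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    # alpha-balancing term to counter class imbalance
    a_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma down-weights easy, well-classified examples
    loss = -a_t * (1.0 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))
    if mask is not None:
        loss = loss * mask  # assumed: reweight masked (e.g. occluded) samples
    return loss.mean()
```

The focusing parameter `gamma` makes confidently classified examples contribute almost nothing, so training concentrates on hard, underrepresented classes.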

Original language: English
Article number: 124
Journal: ACM Transactions on Multimedia Computing, Communications and Applications
Volume: 20
Issue number: 5
DOIs
State: Published - 11 Jan 2024

Keywords

  • Neural networks
  • class imbalance
  • multi-modal fusion
  • sonar image

