Abstract
Underwater image recognition is crucial for underwater detection applications, and fish classification has become an emerging research area in recent years. Existing image classification models are typically trained on data collected from terrestrial environments and are unsuitable for underwater images, whose features are often incomplete and noisy. To address this, we propose a cross-modal augmentation via fusion (CMAF) framework for acoustic-based fish image classification. Our approach separates the process into two branches, a visual modality and a sonar signal modality, where the latter provides complementary character features. We augment the visual modality, design an attention-based fusion module, and adopt a masking-based training strategy with a mask-based focal loss to improve the learning of local features and address the class imbalance problem. Our proposed method outperforms state-of-the-art methods. Our source code is available at https://github.com/WilkinsYang/CMAF.
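The mask-based focal loss mentioned in the abstract builds on the standard focal loss for class imbalance. The following is a minimal NumPy sketch of the base (binary) focal loss only, not the authors' masked variant; the function name and default parameters are illustrative assumptions:

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Standard binary focal loss: down-weights easy examples so
    training focuses on hard, often minority-class, samples."""
    # p_t: probability the model assigns to the true class of each sample
    p_t = np.where(targets == 1, probs, 1.0 - probs)
    # alpha_t: class-balancing weight (alpha for positives, 1-alpha for negatives)
    alpha_t = np.where(targets == 1, alpha, 1.0 - alpha)
    # (1 - p_t)^gamma shrinks the loss of well-classified examples (p_t near 1)
    return -np.mean(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12))
```

With gamma = 0 and alpha = 0.5 this reduces to (half of) the ordinary cross-entropy; increasing gamma progressively suppresses the contribution of confident, correct predictions, which is why it helps on imbalanced fish-class distributions.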
| Original language | English |
| --- | --- |
| Article number | 124 |
| Journal | ACM Transactions on Multimedia Computing, Communications and Applications |
| Volume | 20 |
| Issue number | 5 |
| DOIs | |
| State | Published - 11 Jan 2024 |
Keywords
- Neural networks
- class imbalance
- multi-modal fusion
- sonar image