Improving Tiny YOLO with Fewer Model Parameters

Yanwei Liu, Ching Wen Ma

Research output: Conference contribution › Peer-reviewed

3 Citations (Scopus)

Abstract

With the rapid development of convolutional neural networks (CNNs), a variety of techniques have emerged that can improve existing CNN models, including attention mechanisms, activation functions, and data augmentation. However, integrating these techniques can lead to a significant increase in the number of parameters and FLOPs. Here, we integrated Efficient Channel Attention Net (ECA-Net), the Mish activation function, All Convolutional Net (ALL-CNN), and a twin detection head architecture into YOLOv4-tiny, yielding an AP50 of 44.2% on the MS COCO 2017 dataset. The proposed Attention ALL-CNN Twin Head YOLO (A2-YOLO) outperforms the original YOLOv4-tiny on the same dataset by 3.3% AP50 while reducing the model parameters by 7.26%. Source code is available at https://github.com/e96031413/AA-YOLO
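Of the four integrated techniques, ECA channel attention and the Mish activation are the most self-contained, and both are chosen precisely because they add almost no parameters. The following minimal PyTorch sketch is illustrative only and is not the authors' implementation (see the linked repository for that); the class names ECA and ConvMish are placeholders of mine, not identifiers from the paper.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    # Efficient Channel Attention (ECA-Net): channel attention computed by a
    # 1-D convolution over the globally average-pooled channel descriptor,
    # avoiding the dimensionality-reduction bottleneck of SE blocks.
    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapted to the channel count, as in the ECA-Net paper;
        # forced to the nearest odd value.
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (N, C, H, W) -> (N, C, 1, 1) -> (N, 1, C) for the 1-D convolution
        y = self.pool(x).squeeze(-1).transpose(-1, -2)
        y = torch.sigmoid(self.conv(y)).transpose(-1, -2).unsqueeze(-1)
        return x * y  # re-weight channels, broadcasting over H and W

class ConvMish(nn.Module):
    # Conv-BN block using the Mish activation: x * tanh(softplus(x)).
    def __init__(self, c_in: int, c_out: int, k: int = 3, s: int = 1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.Mish()  # built into PyTorch >= 1.9

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.conv(x)))

A backbone stage such as nn.Sequential(ConvMish(64, 128), ECA(128)) would then re-weight its 128 output channels with only a handful of extra weights (the small 1-D kernel), which is the property that makes ECA-style attention compatible with the paper's goal of improving accuracy while shrinking the parameter count.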

Original language: English
Host publication title: Proceedings - 2021 IEEE 7th International Conference on Multimedia Big Data, BigMM 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 61-64
Number of pages: 4
ISBN (electronic): 9781665434140
DOIs
Publication status: Published - 2021
Event: 7th IEEE International Conference on Multimedia Big Data, BigMM 2021 - Taichung, Taiwan
Duration: 15 Nov 2021 - 17 Nov 2021

Publication series

Name: Proceedings - 2021 IEEE 7th International Conference on Multimedia Big Data, BigMM 2021

Conference

Conference: 7th IEEE International Conference on Multimedia Big Data, BigMM 2021
Country/Territory: Taiwan
City: Taichung
Period: 15/11/21 - 17/11/21
