Improving Tiny YOLO with Fewer Model Parameters

Yanwei Liu, Ching Wen Ma

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

3 Scopus citations

Abstract

With the rapid development of convolutional neural networks (CNNs), a variety of techniques have emerged that can improve existing CNN models, including attention mechanisms, activation functions, and data augmentation. However, integrating these techniques can significantly increase the number of parameters and FLOPs. Here, we integrated the Efficient Channel Attention Net (ECA-Net), the Mish activation function, the All Convolutional Net (ALL-CNN), and a twin detection head architecture into YOLOv4-tiny, yielding an AP50 of 44.2% on the MS COCO 2017 dataset. The proposed Attention ALL-CNN Twin Head YOLO (A2-YOLO) outperforms the original YOLOv4-tiny on the same dataset by 3.3% and reduces the model parameters by 7.26%. Source code is available at https://github.com/e96031413/AA-YOLO
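
For context on two of the building blocks named in the abstract, below is a minimal PyTorch sketch of the Mish activation and an ECA block, written from the published definitions of those techniques. It is an illustrative reimplementation, not code from the authors' repository; class names, parameter names, and the k_size default are our own assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class Mish(nn.Module):
    """Mish activation: f(x) = x * tanh(softplus(x))."""

    def forward(self, x):
        return x * torch.tanh(F.softplus(x))


class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling followed by a
    cheap 1D convolution across channels, avoiding the fully connected
    bottleneck of SE blocks, so it adds only k_size weights per block.
    (Illustrative sketch; not the A2-YOLO authors' implementation.)"""

    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):
        # x: (N, C, H, W) -> per-channel descriptor (N, C, 1, 1)
        y = self.pool(x)
        # Reshape to (N, 1, C) so the 1D conv mixes neighboring channels
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Back to (N, C, 1, 1) and gate the input channel-wise
        y = torch.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        return x * y


# Usage example on a dummy feature map of a tiny detector's scale:
feat = ECA()(Mish()(torch.randn(1, 64, 13, 13)))
```

ECA's appeal for a tiny detector is that its channel attention costs only a handful of 1D-convolution weights per block, which fits the paper's stated goal of improving accuracy while reducing, rather than inflating, the parameter count.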

Original language: English
Title of host publication: Proceedings - 2021 IEEE 7th International Conference on Multimedia Big Data, BigMM 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 61-64
Number of pages: 4
ISBN (Electronic): 9781665434140
DOIs
State: Published - 2021
Event: 7th IEEE International Conference on Multimedia Big Data, BigMM 2021 - Taichung, Taiwan
Duration: 15 Nov 2021 – 17 Nov 2021

Publication series

Name: Proceedings - 2021 IEEE 7th International Conference on Multimedia Big Data, BigMM 2021

Conference

Conference: 7th IEEE International Conference on Multimedia Big Data, BigMM 2021
Country/Territory: Taiwan
City: Taichung
Period: 15/11/21 – 17/11/21

Keywords

  • Deep Learning
  • Object Detection
  • YOLO
