UPANets: Learning from the Universal Pixel Attention Networks

Ching Hsun Tseng, Shin Jye Lee*, Jianan Feng, Shengzhong Mao, Yu Ping Wu, Jia Yu Shang, Xiao Jun Zeng

*Corresponding author for this work

Research output: Article › peer-review

2 Citations (Scopus)

Abstract

With the successful development of computer vision, building deep convolutional neural networks (CNNs) has become mainstream, given the parameter sharing inherent in convolutional layers. Stacking convolutional layers into a deep structure improves performance, but over-stacking also ramps up the GPU resources required. With another surge of Transformers in computer vision, this issue has become more severe: a resource-hungry model can hardly be deployed on limited hardware or a single consumer-grade GPU. Therefore, this work addresses these concerns and proposes an efficient yet robust backbone equipped with channel-direction and spatial-direction attentions, which help expand receptive fields in shallow convolutional layers and pass that information to every layer. The result, an attention-boosted network built on already efficient CNNs, is named Universal Pixel Attention Networks (UPANets). Through a series of experiments, UPANets fulfil the goal of learning global information with fewer resources and outperform many existing SOTAs on CIFAR-{10, 100}.
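To illustrate the general idea of the channel- and spatial-direction attentions mentioned above, the following is a minimal NumPy sketch. It is not the UPANets implementation; the function names, the softmax weighting scheme, and the use of global averaging are illustrative assumptions only, showing how attention can reweight a feature map along the channel axis and along the pixel (spatial) axes.

```python
import numpy as np

def channel_attention(x):
    """Reweight channels of x (C, H, W) by a softmax over their global averages.
    Illustrative sketch only, not the UPANets layer."""
    gap = x.mean(axis=(1, 2))            # (C,) global average pooling per channel
    w = np.exp(gap - gap.max())
    w = w / w.sum()                      # softmax over channels
    return x * w[:, None, None]          # broadcast weights over H, W

def spatial_attention(x):
    """Reweight pixels of x (C, H, W) by a softmax over the channel-averaged map.
    Illustrative sketch only, not the UPANets layer."""
    m = x.mean(axis=0)                   # (H, W) channel-wise mean response
    w = np.exp(m - m.max())
    w = w / w.sum()                      # softmax over all spatial positions
    return x * w[None, :, :]             # broadcast weights over channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))    # toy feature map: 8 channels, 4x4
out = spatial_attention(channel_attention(feat))
print(out.shape)                         # shape is preserved: (8, 4, 4)
```

Because both operations preserve the feature-map shape, attention blocks like these can be inserted after shallow convolutional layers so that globally pooled statistics influence every spatial position, which is the intuition behind expanding receptive fields early in the network.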

Original language: English
Article number: 1243
Journal: Entropy
Volume: 24
Issue number: 9
DOIs
Publication status: Published - Sep 2022
