Deeppear: Deep pose estimation and action recognition

You Ying Jhuang, Wen Jiin Tsai

Research output: Conference contribution › Peer-reviewed

1 citation (Scopus)

Abstract

Human action recognition has become a popular research topic because it can be applied in many applications such as intelligent surveillance systems, human-robot interaction, and autonomous vehicle control. Recognizing human actions from RGB video is challenging because the learning of actions is easily affected by cluttered backgrounds. To cope with this problem, the proposed method first estimates 3D human poses, which helps remove the cluttered background and focus on the human body. In addition to the human poses, the proposed method also utilizes appearance features near the predicted joints to make the action prediction context-aware. Instead of using 3D convolutional neural networks as many action recognition approaches do, the proposed method uses a two-stream architecture that aggregates the results of skeleton-based and appearance-based streams to perform action recognition. Experimental results show that the proposed method achieves state-of-the-art performance on NTU RGB+D, a large-scale dataset for human action recognition.
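To make the two-stream idea concrete, the sketch below shows one plausible late-fusion setup in PyTorch: a skeleton stream over 3D joint coordinates and an appearance stream over per-joint features, with class scores averaged. The layer choices, feature dimension, joint count, and fusion rule are assumptions for illustration only, not the exact model described in the paper.

```python
# Minimal two-stream late-fusion sketch (illustrative; not the paper's exact architecture).
import torch
import torch.nn as nn

class SkeletonStream(nn.Module):
    """Encodes a sequence of 3D joint coordinates (B, T, J, 3) into action logits."""
    def __init__(self, num_joints=25, hidden=256, num_classes=60):
        super().__init__()
        self.gru = nn.GRU(input_size=num_joints * 3, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, joints):                     # joints: (B, T, J, 3)
        b, t = joints.shape[:2]
        x = joints.reshape(b, t, -1)               # flatten joints per frame
        _, h = self.gru(x)                         # h: (1, B, hidden)
        return self.fc(h.squeeze(0))               # (B, num_classes)

class AppearanceStream(nn.Module):
    """Encodes per-joint appearance features (B, T, J, C) cropped around predicted joints."""
    def __init__(self, num_joints=25, feat_dim=128, hidden=256, num_classes=60):
        super().__init__()
        self.gru = nn.GRU(input_size=num_joints * feat_dim, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, feats):                      # feats: (B, T, J, C)
        b, t = feats.shape[:2]
        x = feats.reshape(b, t, -1)
        _, h = self.gru(x)
        return self.fc(h.squeeze(0))

class TwoStreamActionNet(nn.Module):
    """Late fusion: average the class scores of the skeleton and appearance streams."""
    def __init__(self, num_classes=60):
        super().__init__()
        self.skeleton = SkeletonStream(num_classes=num_classes)
        self.appearance = AppearanceStream(num_classes=num_classes)

    def forward(self, joints, joint_feats):
        return 0.5 * (self.skeleton(joints) + self.appearance(joint_feats))

if __name__ == "__main__":
    model = TwoStreamActionNet(num_classes=60)     # NTU RGB+D defines 60 action classes
    joints = torch.randn(2, 30, 25, 3)             # 2 clips, 30 frames, 25 joints (assumed)
    feats = torch.randn(2, 30, 25, 128)            # per-joint appearance features (assumed dim)
    print(model(joints, feats).shape)              # torch.Size([2, 60])
```

Averaging the per-stream scores is only one fusion choice; weighted sums or a learned fusion layer over concatenated features are equally common alternatives.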

Original language: English
Title of host publication: Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 7119-7125
Number of pages: 7
ISBN (electronic): 9781728188089
DOIs
Publication status: Published - 2020
Event: 25th International Conference on Pattern Recognition, ICPR 2020 - Virtual, Milan, Italy
Duration: 10 Jan 2021 - 15 Jan 2021

Publication series

Name: Proceedings - International Conference on Pattern Recognition
ISSN (print): 1051-4651

Conference

Conference: 25th International Conference on Pattern Recognition, ICPR 2020
Country/Territory: Italy
City: Virtual, Milan
Period: 10/01/21 - 15/01/21
