Efficient Video Matting on Human Video Clips for Real-Time Application

Chao Liang Yu*, I. Chen Lin

*Corresponding author of this work

Research output: Conference contribution, peer-reviewed

Abstract

This paper presents an efficient and effective matting framework for human video clips. To alleviate the inefficiency of existing models, we propose a refiner dedicated to error-prone regions and reduce computation at higher resolutions, so the proposed framework achieves real-time performance on 1080p 60 fps video. With its recurrent architecture, our model is aware of temporal information and produces temporally more consistent matting results than models that process each frame individually. Moreover, it contains a module for capturing semantic information, which makes our model easy to use without troublesome setup such as annotating trimaps or providing other additional inputs. Experiments show that our proposed method outperforms previous matting methods and reaches the state of the art on the VideoMatte240K dataset.
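The abstract's key efficiency idea is to run an expensive refiner only on error-prone regions of the coarse matte rather than on the full high-resolution frame. The paper's record here includes no code, so the following is only a minimal NumPy sketch of that general idea; the uncertainty score (alpha values near 0.5 treated as uncertain), the patch size, the patch budget `k`, and the function names are all illustrative assumptions, not the authors' actual design:

```python
import numpy as np

def select_error_prone_patches(coarse_alpha, patch=16, k=8):
    """Rank patches of a coarse alpha matte by a simple uncertainty score.

    Alpha values near 0.5 (neither clearly foreground nor background)
    are treated as error-prone; values near 0 or 1 as reliable.
    """
    h, w = coarse_alpha.shape
    scores = {}
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = coarse_alpha[y:y + patch, x:x + patch]
            # Uncertainty peaks at alpha == 0.5 and vanishes at 0 or 1.
            scores[(y, x)] = float(np.mean(1.0 - np.abs(2.0 * tile - 1.0)))
    # Keep only the k most uncertain patches for costly refinement.
    return sorted(scores, key=scores.get, reverse=True)[:k]

def refine(coarse_alpha, refiner, patch=16, k=8):
    """Apply an expensive `refiner` only to the selected patches.

    Everywhere else the cheap coarse prediction is kept unchanged,
    which is what bounds the per-frame cost.
    """
    out = coarse_alpha.copy()
    for (y, x) in select_error_prone_patches(coarse_alpha, patch, k):
        out[y:y + patch, x:x + patch] = refiner(out[y:y + patch, x:x + patch])
    return out
```

For example, with a 64x64 coarse matte that is certain (alpha 0) everywhere except a band of 0.5 values, only the patches overlapping that band receive the refiner call, so the refinement cost depends on the number of uncertain patches rather than on the frame resolution.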

Original language: English
Title of host publication: Proceedings - 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
Publisher: IEEE Computer Society
Pages: 2165-2170
Number of pages: 6
ISBN (Electronic): 9781665468916
DOIs
Publication status: Published - 2023
Event: 2023 IEEE International Conference on Multimedia and Expo, ICME 2023 - Brisbane, Australia
Duration: 10 Jul 2023 - 14 Jul 2023

Publication series

Name: Proceedings - IEEE International Conference on Multimedia and Expo
Volume: 2023-July
ISSN (Print): 1945-7871
ISSN (Electronic): 1945-788X

Conference

Conference: 2023 IEEE International Conference on Multimedia and Expo, ICME 2023
Country/Territory: Australia
City: Brisbane
Period: 10/07/23 - 14/07/23
