A video painterly stylization using semantic segmentation

Der Lor Way*, Rong Jie Chang, Chin Chen Chang, Zen Chung Shih

*Corresponding author of this work

Research output: Article, peer-reviewed

Abstract

Most deep-learning-based style transfer methods for video extract features from only a single style image to perform texture synthesis. However, this does not allow users to be creative or selective. Moreover, the same style is applied to both foreground objects and the background. This paper presents a painterly style transfer algorithm for video, based on semantic segmentation, that separates the foreground from the background and enables different stylizations for each. First, a fully convolutional neural network was constructed for semantic segmentation, and a GrabCut method with a dynamic bounding box was used to correct segments and refine contours and edges. Second, an enhanced motion estimation method was applied to the foreground and background objects separately. Third, style transfer was used to extract textures from a style image and perform texture synthesis on a content image while preserving the structure of the content image. The proposed method not only improves the motion boundaries of optical flow but also rectifies discontinuous and irregular segmentation caused by occlusion and shape deformation. Finally, the proposed method was evaluated on various videos.
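The segmentation-refinement step of the pipeline can be pictured with OpenCV's GrabCut. The sketch below is illustrative only, not the authors' implementation: it assumes the dynamic bounding box (derived in the paper from the FCN output) is already available as `bbox`, and it stands in for the stylization networks by taking two precomputed stylized frames (`fg_stylized`, `bg_stylized`) and compositing them with the refined mask so that foreground and background receive different styles.

```python
import cv2
import numpy as np


def segment_foreground(frame: np.ndarray,
                       bbox: tuple[int, int, int, int],
                       iterations: int = 5) -> np.ndarray:
    """Binary foreground mask for `frame`, refined by GrabCut
    initialized from the bounding box `bbox` = (x, y, w, h)."""
    mask = np.zeros(frame.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)  # GMM state required by GrabCut
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, bbox, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)
    # GrabCut labels pixels definite/probable foreground/background;
    # collapse the four labels to a binary mask (1 = foreground).
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)


def composite_styles(frame: np.ndarray,
                     fg_stylized: np.ndarray,
                     bg_stylized: np.ndarray,
                     bbox: tuple[int, int, int, int]) -> np.ndarray:
    """Blend two independently stylized frames using the refined mask,
    so the foreground and background carry different painterly styles."""
    fg_mask = segment_foreground(frame, bbox)[..., None]  # broadcast over channels
    return fg_stylized * fg_mask + bg_stylized * (1 - fg_mask)
```

A per-frame loop would read each video frame, update `bbox` from the semantic segmentation of that frame, and composite the two stylized renderings; the temporal-coherence and motion-estimation steps described in the abstract are outside this sketch.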
