A video painterly stylization using semantic segmentation

Der Lor Way*, Rong Jie Chang, Chin Chen Chang, Zen Chung Shih

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Most deep-learning-based style transfer methods for video extract features from only a single style image to perform texture synthesis. However, this does not allow users to be creative or selective. Moreover, styles are applied uniformly to both foreground objects and backgrounds. This paper presents a painterly style transfer algorithm for video based on semantic segmentation that separates the foreground from the background, enabling different stylizations for each. First, a fully convolutional neural network was constructed for semantic segmentation, and a GrabCut method with a dynamic bounding box was used to correct segments and refine contours and edges. Second, an enhanced motion estimation method was applied between the foreground and background objects. Third, style transfer was used to extract textures from a style image and synthesize them onto a content image while preserving the content image's structure. The proposed method not only improves the motion boundaries of optical flow but also rectifies discontinuous and irregular segmentation caused by occlusion and shape deformation. Finally, the proposed method was evaluated on various videos.
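The core idea of per-region stylization can be sketched as a mask-guided composite: given two stylized renderings of the same frame (one for the foreground, one for the background) and a binary segmentation mask, blend them per pixel. This is a minimal NumPy sketch of that compositing step only, not the authors' implementation; the function name and the toy inputs are illustrative assumptions.

```python
import numpy as np

def composite_stylizations(fg_stylized, bg_stylized, mask):
    """Combine two stylized renderings of one frame using a
    foreground mask (1 = foreground pixel, 0 = background pixel)."""
    # Add a channel axis so the HxW mask broadcasts over HxWx3 images.
    mask3 = mask[..., None].astype(np.float32)
    blended = mask3 * fg_stylized + (1.0 - mask3) * bg_stylized
    return blended.astype(fg_stylized.dtype)

# Toy 2x2 "frame": foreground style is white, background style is black,
# and only the top-left pixel belongs to the foreground.
fg = np.full((2, 2, 3), 255, dtype=np.uint8)
bg = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1, 0], [0, 0]], dtype=np.uint8)
out = composite_stylizations(fg, bg, mask)
```

In the paper's pipeline the mask itself would come from the FCN segmentation refined by GrabCut, and the two stylized renderings from the style-transfer stage; a soft (feathered) mask can be substituted for the binary one to hide the seam along object contours.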

Original language: English
Pages (from-to): 357-367
Number of pages: 11
Journal: Journal of the Chinese Institute of Engineers, Transactions of the Chinese Institute of Engineers, Series A
Issue number: 4
State: Published - 2022


  • deep learning
  • motion estimation
  • semantic segmentation
  • stylization


