Abstract
In this paper, we present an improved approach to style transfer for videos based on semantic segmentation. We segment foreground objects and the background, and then apply a different style to each. A fully convolutional neural network is used to perform the semantic segmentation. We increase the reliability of the segmentation and refine it iteratively using the segmentation results and the relationship between foreground objects and the background. We also use the segmentation to improve optical flow, applying different motion estimation methods to foreground objects and to the background. This sharpens motion boundaries in the optical flow and resolves the incorrect and discontinuous segmentation caused by occlusion and shape deformation.
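The core compositing step described above (stylizing foreground and background separately, then merging them with the segmentation mask) can be illustrated with a minimal sketch. This is not the authors' implementation: `stylize_fg`/`stylize_bg` stand in for arbitrary style-transfer models, and the mask is assumed to come from the fully convolutional segmentation network.

```python
# Minimal sketch: composite two pre-stylized frames with a binary
# foreground mask. Hypothetical names; the actual segmentation and
# style-transfer networks from the paper are not reproduced here.
import numpy as np


def composite_styles(fg_styled: np.ndarray,
                     bg_styled: np.ndarray,
                     fg_mask: np.ndarray) -> np.ndarray:
    """Blend a foreground-styled and a background-styled frame.

    fg_styled, bg_styled: H x W x 3 float arrays in [0, 1].
    fg_mask: H x W array, 1 for foreground pixels, 0 for background.
    """
    mask = fg_mask.astype(np.float32)[..., None]  # broadcast over channels
    return mask * fg_styled + (1.0 - mask) * bg_styled


if __name__ == "__main__":
    h, w = 240, 320
    fg_styled = np.random.rand(h, w, 3).astype(np.float32)  # placeholder stylized frame
    bg_styled = np.random.rand(h, w, 3).astype(np.float32)  # placeholder stylized frame
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[60:180, 80:240] = 1                                 # placeholder segmentation mask
    out = composite_styles(fg_styled, bg_styled, mask)
    print(out.shape)  # (240, 320, 3)
```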
| Original language | American English |
| --- | --- |
| Pages | 1-2 |
| Number of pages | 2 |
| DOIs | |
| State | Published - 30 May 2018 |
| Event | 2018 International Workshop on Advanced Image Technology, IWAIT 2018, Chiang Mai, Thailand. Duration: 7 Jan 2018 → 9 Jan 2018 |
Conference
| Conference | 2018 International Workshop on Advanced Image Technology, IWAIT 2018 |
| --- | --- |
| Country/Territory | Thailand |
| City | Chiang Mai |
| Period | 7/01/18 → 9/01/18 |
Keywords
- Motion estimation
- Neural network
- Semantic segmentation
- Style transfer