2-D Deep Learning Model on 3-D Image Segmentation

Chien Chia Lee, Hsien I. Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

1 Scopus citation

Abstract

For image segmentation of 3D objects, researchers have recently focused on 3D deep learning methods. These studies aim at developing robust and accurate deep learning models. However, 3D deep learning methods that operate on point-cloud data are time-consuming. In this paper, we present a novel 3D image segmentation method based on a 2D deep learning model to achieve efficient segmentation. Using a single camera angle to collect 3D object information, we use only the depth map and adopt a 2D deep learning model to segment the objects in the scene. We validated the proposed method through an extensive experimental comparison with Point-Net. The results show that our model achieves similar accuracy but is much faster than Point-Net.
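
The abstract does not specify the network architecture; as a rough illustrative sketch only (an assumption, not the authors' model), the core idea can be expressed as treating the depth map as a one-channel 2D image and feeding it to an ordinary 2D encoder-decoder segmentation network, for example in PyTorch:

import torch
import torch.nn as nn

class DepthSegNet(nn.Module):
    # Hypothetical small encoder-decoder that segments a single-channel depth map.
    def __init__(self, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # downsample to H/2 x W/2
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),  # downsample to H/4 x W/4
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(16, 16, kernel_size=2, stride=2), nn.ReLU(inplace=True),
            nn.Conv2d(16, num_classes, kernel_size=1),  # per-pixel class scores
        )

    def forward(self, depth):  # depth: (N, 1, H, W)
        return self.decoder(self.encoder(depth))

# Example: a 480x640 depth map captured from a single camera view.
depth_map = torch.randn(1, 1, 480, 640)          # placeholder for real sensor data
logits = DepthSegNet(num_classes=2)(depth_map)   # (1, 2, 480, 640)
mask = logits.argmax(dim=1)                      # per-pixel object/background labels

Because the convolutions operate on a dense H x W grid rather than an unordered point cloud, per-point processing is avoided, which is the intuition behind the speed advantage over Point-Net reported in the abstract.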

Original language: English
Title of host publication: 2020 IEEE International Conference on Mechatronics and Automation, ICMA 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 605-609
Number of pages: 5
ISBN (Electronic): 9781728164151
DOIs
State: Published - 13 Oct 2020
Event: 17th IEEE International Conference on Mechatronics and Automation, ICMA 2020 - Beijing, China
Duration: 13 Oct 2020 - 16 Oct 2020

Publication series

Name: 2020 IEEE International Conference on Mechatronics and Automation, ICMA 2020

Conference

Conference: 17th IEEE International Conference on Mechatronics and Automation, ICMA 2020
Country/Territory: China
City: Beijing
Period: 13/10/20 - 16/10/20

Keywords

  • 2D deep learning model
  • Image segmentation
  • point-cloud data
  • Point-Net
