3D SOC-Net: Deep 3D reconstruction network based on self-organizing clustering mapping

Y. S. Gan, Weihao Chen, Wei Chuen Yau*, Ziyun Zou, Sze Teng Liong, Shih Yuan Wang

*Corresponding author of this work

Research output: Article › peer-review

4 Citations (Scopus)

Abstract

Image-based 3D reconstruction from a single-view image is critical and fundamental in many areas and can be integrated into many applications to provide useful functions. However, there are several crucial difficulties and challenges in accomplishing this process, such as self-occlusion and the absence of object information from other viewpoints. Consequently, the quality of the 3D shape generated from a single-view image may be unsatisfactory and lack robustness, which limits its feasibility in further applications. Conventionally, the 3D reconstruction process requires multiple input images so that the context of the target object can be fully conveyed. In this paper, we propose a new and simple, yet powerful framework that improves the quality of the point cloud generated from a single-view image. Concretely, the significant representatives are first discovered and selected by adopting a network architecture that contains both encoder and decoder models. Finally, the resultant point clouds are obtained by extracting the mean shape using Chamfer Distance (CD), Earth Mover's Distance (EMD), and Self-Organizing Map (SOM) methods. As a result, the proposed algorithm demonstrates its robustness and effectiveness when compared to state-of-the-art 3D reconstruction methods. The best mean loss achieved is 4.45 when evaluated on 12 classes of the ShapeNetCoreV1 dataset. In addition, qualitative results are presented to further verify the reliability of the proposed method.
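To make the evaluation metric concrete, the following is a minimal NumPy sketch of the symmetric Chamfer Distance (CD) mentioned in the abstract, one of the measures used to compare a generated point cloud against a reference. The function name and brute-force pairwise formulation are illustrative, not the authors' implementation.

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer Distance between two point clouds.

    p: (N, 3) array, q: (M, 3) array. For every point in one cloud,
    take the squared distance to its nearest neighbour in the other
    cloud, then average both directions and sum them.
    """
    # Pairwise squared Euclidean distances, shape (N, M).
    diff = p[:, None, :] - q[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    # Nearest-neighbour terms in each direction.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Sanity check: identical clouds have zero Chamfer Distance.
cloud = np.random.rand(128, 3)
print(chamfer_distance(cloud, cloud))  # → 0.0
```

A lower CD indicates that the two clouds cover each other more closely; in practice, large clouds would use a KD-tree nearest-neighbour search instead of the O(NM) pairwise matrix shown here.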

Original language: English
Article number: 119209
Journal: Expert Systems with Applications
Volume: 213
DOIs
Publication status: Published - 1 Mar 2023

