Positioning Unit Cell Model Duplication with Residual Concatenation Neural Network (RCNN) and Transfer Learning for Visible Light Positioning (VLP)

Dong Chang Lin, Chi Wai Chow*, Ching Wei Peng, Tun Yao Hung, Yun Han Chang, Shao Hua Song, Yun Shen Lin, Yang Liu, Kun Hsien Lin

*Corresponding author of this work

Research output: Article › peer-review

23 Citations (Scopus)

Abstract

Machine learning (ML) can be employed to enhance the positioning accuracy of visible-light-positioning (VLP) systems. To reduce the training time and complexity, the whole coverage area is usually divided into several positioning unit cells. Most studies in the literature focus only on the positioning performance within a single unit cell and assume that the unit cell can be duplicated repeatedly to cover the whole area. In this work, we propose and demonstrate a positioning unit cell model duplication scheme, named the spatial sequence adaptation (SSA) process. We also propose and demonstrate a residual concatenation neural network (RCNN) and transfer learning (TL) to refine the model of the target positioning unit cell. A practical test-bed with a vertical distance of 2.8 m, consisting of two unit cells each measuring about 1.55 m × 2 m, is constructed. The client side is an autonomous mobile robot (AMR) that acquires continuous training and testing data. Our experimental results reveal that high-precision positioning can be achieved in the duplicated unit cell.
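The abstract does not give the network's layer details, so the following is only a minimal PyTorch sketch of the two named ideas: a residual-concatenation block (the skip path is concatenated with, rather than added to, the transformed path) and transfer learning that reuses source-cell weights while fine-tuning on target-cell data. The names `RCNNRegressor`, `n_features=4`, and `source_cell.pt` are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class ResidualConcatBlock(nn.Module):
    """Hypothetical residual-concatenation block: the block's input is
    concatenated with its transformed output, so later layers see both
    the raw and the processed features."""
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection via concatenation instead of summation.
        return torch.cat([x, self.fc(x)], dim=-1)

class RCNNRegressor(nn.Module):
    """Stacked residual-concatenation blocks followed by a regression
    head mapping received-light features to an (x, y) position."""
    def __init__(self, n_features: int = 4, hidden: int = 32, n_blocks: int = 3):
        super().__init__()
        dim, blocks = n_features, []
        for _ in range(n_blocks):
            blocks.append(ResidualConcatBlock(dim, hidden))
            dim += hidden  # each concatenation widens the feature vector
        self.backbone = nn.Sequential(*blocks)
        self.head = nn.Linear(dim, 2)  # 2-D coordinate output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

# Transfer learning for the duplicated (target) unit cell: start from the
# source-cell weights, freeze the backbone, and fine-tune only the head
# on a small amount of SSA-adapted target-cell data.
model = RCNNRegressor()
# model.load_state_dict(torch.load("source_cell.pt"))  # hypothetical checkpoint
for p in model.backbone.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
```

Freezing the backbone and retraining only the head is one common TL recipe; the paper may instead fine-tune all layers or a different subset.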

Original language: English
Pages (from-to): 6366-6372
Number of pages: 7
Journal: Journal of Lightwave Technology
Volume: 39
Issue number: 20
DOIs
Publication status: Published - 15 Oct 2021
