Using Machine Learning and Light Spatial Sequence Arrangement for Copying Positioning Unit Cell to Reduce Training Burden in Visible Light Positioning (VLP)

Li Sheng Hsu, Dong Chang Lin, Chi Wai Chow, Tun Yao Hung, Yun Han Chang, Ching Wei Peng, Yang Liu, Chien Hung Yeh, Kun Hsien Lin

Research output: Conference contribution › Peer-reviewed

1 Citation (Scopus)

Abstract

Machine learning (ML) can improve the positioning accuracy of visible-light-positioning (VLP) systems. To reduce the training time and complexity, the first step is to divide the whole positioning area into many positioning unit cells. The second step is to train one positioning unit cell and then copy the 'trained' unit-cell model to other un-trained 'target' unit cells. Here, we show that simply copying and applying the 'trained' ML model to other unit cells produces high positioning errors. We propose and demonstrate a light spatial sequence arrangement (LSSA) scheme, together with a second-order linear regression (LR) ML algorithm, to copy a 'trained' unit cell to a 'target' unit cell. A practical test-bed is constructed. By applying the proposed scheme, the average positioning error is significantly reduced by 90.71%, while the training burden is greatly reduced because no training is needed in the 'target' unit cell.
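As a rough illustration of the per-unit-cell training step described in the abstract (not the authors' implementation, and not the LSSA copying scheme itself), the sketch below fits a second-order (degree-2 polynomial) linear regression that maps received-signal-strength readings from several LEDs to 2-D receiver coordinates within one unit cell. The LED layout, cell size, and toy channel model are illustrative assumptions.

```python
# Hypothetical sketch: second-order linear regression for one VLP positioning unit cell.
# LED positions, cell size, and the inverse-square channel model are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Four ceiling LEDs at the corners of a 1 m x 1 m unit cell, 2 m above the receiver plane.
led_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
LED_HEIGHT = 2.0

def rss_features(xy):
    """Toy received-signal-strength model: intensity falls off with squared distance to each LED."""
    d2 = ((xy[:, None, :] - led_xy[None, :, :]) ** 2).sum(axis=2) + LED_HEIGHT ** 2
    return 1.0 / d2  # shape (n_samples, 4)

# Calibration grid of known positions inside the 'trained' unit cell.
grid = np.linspace(0.0, 1.0, 21)
xy_train = np.array([(x, y) for x in grid for y in grid])
X_train = rss_features(xy_train)

# Second-order linear regression: degree-2 polynomial features + ordinary least squares.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X_train, xy_train)

# Estimate an unknown receiver position from its RSS vector.
xy_true = np.array([[0.37, 0.62]])
xy_est = model.predict(rss_features(xy_true))
print("true:", xy_true[0], "estimated:", xy_est[0])
```

Under the paper's scheme, a model trained this way in one unit cell would be reused in a 'target' unit cell via the LSSA arrangement rather than re-trained there; the details of that mapping are specific to the paper and are not reproduced here.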

Original language: English
Title of host publication: 2021 30th Wireless and Optical Communications Conference, WOCC 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 106-109
Number of pages: 4
ISBN (electronic): 9781665427722
DOIs
Publication status: Published - 2021
Event: 30th Wireless and Optical Communications Conference, WOCC 2021 - Taipei, Taiwan
Duration: 7 Oct 2021 → 8 Oct 2021

Publication series

Name: 2021 30th Wireless and Optical Communications Conference, WOCC 2021

Conference

Conference: 30th Wireless and Optical Communications Conference, WOCC 2021
Country/Territory: Taiwan
City: Taipei
Period: 7/10/21 → 8/10/21
