Real-Time Information Fusion System Implementation Based on ARM-Based FPGA

Yu Hsiang Tsai, Yung Jhe Yan, Meng Hsin Hsiao, Tzu Yi Yu, Mang Ou-Yang*


Research output: Article › peer-review

1 Citation (Scopus)


In this study, an information fusion system displayed fused information on a transparent display by considering the relationships among the display, the background exhibit, and the user's gaze direction. An ARM-based field-programmable gate array (FPGA) performed the virtual–real fusion of this system, and its execution speed was evaluated. The ARM-based FPGA used Intel® RealSense™ D435i depth cameras to capture depth and color images of an observer and an exhibit. The image data were received on the ARM side and fed to the FPGA side for real-time object detection. The FPGA accelerated the convolutional neural network computations used to recognize observers and exhibits. In addition, an FPGA module was developed for rapid registration between the color and depth images. This module calculated the size and position of the information shown on the transparent display from the pixel coordinates and depth values of the human eye and the exhibit. A personal computer with an RTX 2060 GPU performed information fusion in ~47 ms, whereas the ARM-based FPGA accomplished it in 25 ms; thus, fusion on the ARM-based FPGA was 1.8 times faster than on the computer.
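The abstract states that the display position of the fused information is derived from the pixel coordinates and depth values of the eye and the exhibit. A minimal geometric sketch of that idea, assuming the transparent display lies in a known plane (here z = 0) between the observer and the exhibit, is to intersect the eye-to-exhibit sight line with the display plane; the function name and coordinate convention below are hypothetical, not taken from the paper:

```python
import numpy as np

def project_to_display(eye, target, display_z=0.0):
    """Intersect the eye->target sight line with the display plane z = display_z.

    eye, target: 3D points (x, y, z) in a common world frame, recovered from
    pixel coordinates plus depth (as in the paper's registration module).
    Assumes the transparent display lies in the plane z = display_z between
    the observer and the exhibit.
    """
    eye = np.asarray(eye, dtype=float)
    target = np.asarray(target, dtype=float)
    # Parametric point on the line eye + t * (target - eye) where z = display_z.
    t = (display_z - eye[2]) / (target[2] - eye[2])
    return eye + t * (target - eye)

# Example: eye 0.5 m in front of the display, exhibit 1.0 m behind it.
eye = (0.0, 0.2, -0.5)
exhibit = (0.3, 0.0, 1.0)
p = project_to_display(eye, exhibit)  # point on the display where the label goes
```

Repeating this projection for two exhibit extremities yields the on-screen size of the overlay as well as its position, which matches the abstract's description of the size-and-position calculation.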

Journal: Applied Sciences (Switzerland)
Publication status: Published - July 2023
