TY - GEN
T1 - Fusing multi-sensory data for precision indoor localization
AU - Chiang, Ting Hui
AU - Shiu, Huan Ruei
AU - Erol-Kantarci, Melike
AU - Tseng, Yu Chee
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/6
Y1 - 2020/6
N2 - Indoor localization is a fundamental problem in the IoT (Internet of Things). Conversely, the IoT provides many networked devices that can help improve the precision of indoor localization. The Particle Filter (PF) is widely used in indoor localization because of its flexibility in adapting to different, and usually complex, indoor floorplans and furniture placements. In this work, we consider the fusion of multi-sensory data using a PF. We focus on three popular sensor types: IM (inertial measurement) sensors, RF (radio frequency) sensors, and environmental visual sensors. In particular, environmental visual sensors require no extra device to be attached to the localized targets. We propose a PF model that can accept all three types of sensory inputs. We show that in scenarios where visual sensory inputs are available, sub-meter precision can be achieved, and in places without visual coverage, seamless localization with reasonable precision can be maintained by the other sensors. Field trial results are presented, showing that our model is well suited to areas such as lobbies, corridors, and meeting rooms.
AB - Indoor localization is a fundamental problem in the IoT (Internet of Things). Conversely, the IoT provides many networked devices that can help improve the precision of indoor localization. The Particle Filter (PF) is widely used in indoor localization because of its flexibility in adapting to different, and usually complex, indoor floorplans and furniture placements. In this work, we consider the fusion of multi-sensory data using a PF. We focus on three popular sensor types: IM (inertial measurement) sensors, RF (radio frequency) sensors, and environmental visual sensors. In particular, environmental visual sensors require no extra device to be attached to the localized targets. We propose a PF model that can accept all three types of sensory inputs. We show that in scenarios where visual sensory inputs are available, sub-meter precision can be achieved, and in places without visual coverage, seamless localization with reasonable precision can be maintained by the other sensors. Field trial results are presented, showing that our model is well suited to areas such as lobbies, corridors, and meeting rooms.
KW - Data fusion
KW - Indoor localization
KW - Wearable computing
KW - Wireless sensor network
UR - http://www.scopus.com/inward/record.url?scp=85090292599&partnerID=8YFLogxK
U2 - 10.1109/ICCWorkshops49005.2020.9145062
DO - 10.1109/ICCWorkshops49005.2020.9145062
M3 - Conference contribution
AN - SCOPUS:85090292599
T3 - 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings
BT - 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Conference on Communications Workshops, ICC Workshops 2020
Y2 - 7 June 2020 through 11 June 2020
ER -