TY - GEN
T1 - Who Takes What
T2 - 2019 International Conference on Robotics and Automation, ICRA 2019
AU - Kao, Hsin Wei
AU - Ke, Ting Yuan
AU - Lin, Ching-Ju
AU - Tseng, Yu-Chee
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/5/20
Y1 - 2019/5/20
AB - Advanced Internet of Things (IoT) techniques have made human-environment interaction much easier. Existing solutions usually enable such interactions without knowing the identities of the action performers. However, identifying the users who interact with an environment is key to enabling personalized services. To provide such an add-on service, we propose WTW (who takes what), a system that identifies which user takes what object. Unlike traditional vision-based approaches, which are typically vulnerable to occlusion, WTW combines features from three types of data, i.e., images, skeletons, and IMU data, to enable reliable user-object matching and identification. By correlating the moving trajectory of a user, monitored by inertial sensors, with the movement of an object recorded in the video, WTW reliably identifies a user and matches him/her with the object being taken. Our prototype evaluation shows that WTW achieves a recognition rate of over 90% even in a crowd. The system remains reliable even when users stand close to one another and take objects at roughly the same time.
UR - http://www.scopus.com/inward/record.url?scp=85071429283&partnerID=8YFLogxK
U2 - 10.1109/ICRA.2019.8793858
DO - 10.1109/ICRA.2019.8793858
M3 - Conference contribution
AN - SCOPUS:85071429283
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 8063
EP - 8069
BT - 2019 International Conference on Robotics and Automation, ICRA 2019
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 20 May 2019 through 24 May 2019
ER -