Fed-HANet: Federated Visual Grasping Learning for Human Robot Handovers

Ching I. Huang, Yu Yen Huang, Jie Xin Liu, Yu Ting Ko, Hsueh Cheng Wang*, Kuang Hsing Chiang, Lap Fai Yu

*Corresponding author for this work

Research output: Article › peer-review

5 Citations (Scopus)

Abstract

Human-robot handover is a key capability of service robots, such as those used to perform routine logistical tasks for healthcare workers. Recent algorithms have achieved tremendous advances in object-agnostic end-to-end planar grasping with up to six degrees of freedom (DoF); however, compiling the requisite datasets is simply not feasible in many situations, and many users consider the use of camera feeds invasive. This letter presents an end-to-end control system for the visual grasping of unseen objects with 6-DoF without infringing on the privacy or personal space of human counterparts. In experiments, the proposed Fed-HANet system, trained using the federated learning framework, achieved accuracy close to that of centralized non-privacy-preserving systems while outperforming baseline methods that rely on fine-tuning. We also explore the use of a depth-only method and compare its performance to that of a state-of-the-art method, but ultimately emphasize the importance of using RGB inputs for better grasp success. The practical applicability of the proposed system in a robotic system was assessed in a user study involving 12 participants. The dataset for training and all pretrained models are available at https://arg-nctu.github.io/projects/fed-hanet.html.
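The abstract describes training under a federated learning framework so that raw camera feeds never leave each client, and only model weights reach the server. As a minimal sketch of the generic federated averaging (FedAvg) step this implies — not the paper's exact aggregation scheme, and using hypothetical stand-in models and dataset sizes — the server-side combination could look like this:

```python
# Minimal FedAvg sketch. Assumption: Fed-HANet's exact aggregation details
# are in the paper; this only illustrates the generic weighted averaging of
# client model weights that a federated training round performs.
import torch
import torch.nn as nn

def fedavg(client_states, client_sizes):
    """Average client state dicts, weighted by each client's local dataset size."""
    total = sum(client_sizes)
    avg = {k: torch.zeros_like(v) for k, v in client_states[0].items()}
    for state, n in zip(client_states, client_sizes):
        for k, v in state.items():
            avg[k] += v * (n / total)
    return avg

# Usage: each client trains locally on its own handover images; only the
# resulting weights (never raw camera frames) are sent to the server.
clients = [nn.Linear(4, 2) for _ in range(3)]  # stand-ins for grasp networks
sizes = [120, 80, 200]                         # hypothetical local dataset sizes
global_state = fedavg([c.state_dict() for c in clients], sizes)
```

Weighting each client by its local dataset size is the standard FedAvg choice; because only weight tensors cross the network, the privacy of the local camera data is preserved, which is the property the abstract emphasizes.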

Original language: English
Pages (from-to): 3772-3779
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 8
Issue number: 6
DOIs
Publication status: Published - 1 Jun 2023
