Fed-HANet: Federated Visual Grasping Learning for Human Robot Handovers

Ching I. Huang, Yu Yen Huang, Jie Xin Liu, Yu Ting Ko, Hsueh Cheng Wang*, Kuang Hsing Chiang, Lap Fai Yu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

Human-robot handover is a key capability of service robots, such as those used to perform routine logistical tasks for healthcare workers. Recent algorithms have achieved tremendous advances in object-agnostic end-to-end planar grasping with up to six degrees of freedom (DoF); however, compiling the requisite datasets is simply not feasible in many situations, and many users consider the use of camera feeds invasive. This letter presents an end-to-end control system for the 6-DoF visual grasping of unseen objects without infringing on the privacy or personal space of human counterparts. In experiments, the proposed Fed-HANet system, trained using the federated learning framework, achieved accuracy close to that of centralized non-privacy-preserving systems while outperforming baseline methods that rely on fine-tuning. We also explore the use of a depth-only method and compare its performance to that of a state-of-the-art method, but ultimately emphasize the importance of RGB inputs for grasp success. The practical applicability of the proposed system was assessed in a user study involving 12 participants. The dataset for training and all pretrained models are available at https://arg-nctu.github.io/projects/fed-hanet.html.
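To make the privacy-preserving training setup concrete, the sketch below shows federated averaging (FedAvg), the standard aggregation step in federated learning: each client trains locally on its private data and sends only model parameters, never raw camera frames, to the server. This is a minimal generic illustration, not the authors' exact implementation; the names `fed_avg`, `client_weights`, and `num_samples` are hypothetical.

```python
# Minimal FedAvg sketch (illustrative; not the Fed-HANet codebase).
import numpy as np

def fed_avg(client_weights, num_samples):
    """Average per-client parameters, weighted by local dataset size.

    client_weights: list of dicts mapping layer name -> np.ndarray
    num_samples:    list of ints, local training-set size per client
    """
    total = sum(num_samples)
    averaged = {}
    for name in client_weights[0]:
        # Weighted sum of each client's copy of this parameter tensor.
        averaged[name] = sum(
            w[name] * (n / total)
            for w, n in zip(client_weights, num_samples)
        )
    return averaged

# Example: two clients that trained locally on private RGB-D handover data;
# only the parameter tensors are communicated to the central server.
clients = [
    {"conv1": np.ones((3, 3)), "fc": np.zeros(4)},
    {"conv1": np.zeros((3, 3)), "fc": np.ones(4)},
]
global_weights = fed_avg(clients, num_samples=[100, 300])
print(global_weights["conv1"][0, 0])  # 0.25 = (100/400) * 1.0
```

The server broadcasts the averaged weights back to the clients for the next round, which is how a federated model can approach the accuracy of a centralized one without the clients ever sharing their camera feeds.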

Original language: English
Pages (from-to): 3772-3779
Number of pages: 8
Journal: IEEE Robotics and Automation Letters
Volume: 8
Issue number: 6
DOIs
State: Published - 1 Jun 2023

Keywords

  • Federated learning
  • human-robot interaction
  • service robots
