Abstract
Human-robot handover is a key capability of service robots, such as those used to perform routine logistical tasks for healthcare workers. Recent algorithms have achieved tremendous advances in object-agnostic end-to-end planar grasping with up to six degrees of freedom (DoF); however, compiling the requisite datasets is simply not feasible in many situations, and many users consider the use of camera feeds invasive. This letter presents an end-to-end control system for the visual grasping of unseen objects with 6-DoF without infringing on the privacy or personal space of human counterparts. In experiments, the proposed Fed-HANet system, trained using the federated learning framework, achieved accuracy close to that of centralized non-privacy-preserving systems, while outperforming baseline methods that rely on fine-tuning. We also explore the use of a depth-only method and compare its performance to that of a state-of-the-art method, but ultimately emphasize the importance of RGB inputs for grasp success. The practical applicability of the proposed system in a robotic system was assessed in a user study involving 12 participants. The dataset for training and all pretrained models are available at https://arg-nctu.github.io/projects/fed-hanet.html.
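The federated training described in the abstract can be illustrated with the canonical FedAvg aggregation step, in which clients (e.g., individual robot deployments) contribute model updates without ever sharing raw camera data. This is a minimal sketch under that assumption; the function name, parameter layout, and values below are illustrative and not taken from the paper's actual implementation.

```python
# Minimal FedAvg sketch: average per-client model parameters,
# weighted by each client's local dataset size. Parameters are
# represented as dicts mapping layer name -> list of floats.

def fed_avg(client_weights, client_sizes):
    """Return the size-weighted average of client parameter dicts."""
    total = sum(client_sizes)
    averaged = {}
    for name in client_weights[0]:
        n_params = len(client_weights[0][name])
        averaged[name] = [
            sum(w[name][i] * size / total
                for w, size in zip(client_weights, client_sizes))
            for i in range(n_params)
        ]
    return averaged

# Two hypothetical clients contribute updates; only weights leave the site.
clients = [{"conv1": [1.0, 2.0]}, {"conv1": [3.0, 4.0]}]
sizes = [1, 3]
print(fed_avg(clients, sizes))  # {'conv1': [2.5, 3.5]}
```

The weighting by local dataset size is what lets unevenly sized sites contribute proportionally, which is the property that allows a federated model to approach the accuracy of a centralized one without pooling privacy-sensitive images.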
Original language | English |
---|---|
Pages (from-to) | 3772-3779 |
Number of pages | 8 |
Journal | IEEE Robotics and Automation Letters |
Volume | 8 |
Issue number | 6 |
DOIs | |
Publication status | Published - 1 Jun 2023 |