Enhancing Robotic Grasping Detection Accuracy With the R2CNN Algorithm and Force-Closure

Hsien I. Lin*, Muhammad Ahsan Fatwaddin Shodiq, Hong Qi Chu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

This study uses an improved rotational region convolutional neural network (R2CNN) algorithm to detect the grasping bounding box for a robotic arm that picks up supermarket goods. The algorithm computes the final predicted grasping bounding box without any additional architecture, which significantly improves the speed of grasp inference. In this study, we added a force-closure condition so that the final grasping bounding box achieves grasping stability in a physical sense. We experimentally demonstrated that a deep model can treat object detection and grasping detection as the same task. We used transfer learning to improve the prediction accuracy of the grasping bounding box. In particular, ResNet-101 network weights originally trained for object detection were used to continue training on the Cornell dataset. For grasping detection, we fed the trained object-detection model weights to the network as features of the to-be-grasped objects and continued training. On 2828 test images, this method achieved nearly 98% accuracy at a speed of 14–17 frames per second.
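The abstract does not give the paper's exact force-closure formulation, but the standard two-finger (antipodal) test it alludes to can be sketched as follows: a parallel-jaw grasp is force-closure when the line connecting the two contact points lies inside both friction cones. The function name, point/normal representation, and friction coefficient below are illustrative assumptions, not the authors' implementation.

```python
import math

def force_closure_2d(p1, p2, n1, n2, mu):
    """Two-finger antipodal force-closure test in 2D (illustrative sketch).

    p1, p2: contact points (x, y).
    n1, n2: inward-pointing unit surface normals at the contacts.
    mu:     Coulomb friction coefficient.
    The grasp is force-closure if the line joining the contacts lies
    inside both friction cones, whose half-angle is atan(mu).
    """
    half_angle = math.atan(mu)
    # Unit direction from contact 1 toward contact 2.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return False  # degenerate grasp: coincident contacts
    d = (dx / dist, dy / dist)
    # Angle between the connecting line and each inward normal
    # (clamped dot products guard against floating-point drift).
    ang1 = math.acos(max(-1.0, min(1.0, d[0] * n1[0] + d[1] * n1[1])))
    ang2 = math.acos(max(-1.0, min(1.0, -d[0] * n2[0] - d[1] * n2[1])))
    return ang1 <= half_angle and ang2 <= half_angle
```

In a grasp-detection pipeline such as the one described, a check of this kind could score the contact pair implied by the short edges of each candidate rotated bounding box, filtering out predictions that are geometrically unstable.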

Original language: English
Article number: 061005
Journal: Journal of Computing and Information Science in Engineering
Volume: 24
Issue number: 6
DOIs
State: Published - 1 Jun 2024

Keywords

  • force-closure
  • grasping detection
  • R2CNN
  • robotic arm
