Abstract
This study uses an improved rotational region convolutional neural network (R2CNN) algorithm to detect the grasping bounding box for a robotic arm that reaches for supermarket goods. The algorithm computes the final predicted grasping bounding box without any additional architecture, which significantly improves the speed of grasp inference. We added a force-closure condition so that the final grasping bounding box achieves grasping stability in a physical sense. We experimentally demonstrated that a deep model can treat object detection and grasping detection as the same task, and we used transfer learning to improve the prediction accuracy of the grasping bounding box. Specifically, ResNet-101 network weights originally trained for object detection were used to continue training on the Cornell dataset: the trained object-detection weights served as initial features for the to-be-grasped objects and were fed to the network for further training. On 2828 test images, this method achieved nearly 98% accuracy at a speed of 14–17 frames per second.
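The force-closure condition mentioned above can be illustrated with the classic two-finger antipodal grasp test from grasp mechanics: a planar grasp with two frictional point contacts is force-closure when the line joining the contact points lies inside both friction cones. This is a minimal sketch of that standard criterion, not the paper's implementation; the function name, contact representation, and friction coefficient are illustrative assumptions.

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu=0.5):
    """Two-finger planar force-closure test (point contacts with friction).

    p1, p2 : 2D contact points on the object boundary.
    n1, n2 : inward-pointing contact normals at p1 and p2.
    mu     : Coulomb friction coefficient (illustrative value).

    The grasp is force-closure if the angle between the line p1->p2 and
    each inward normal is within the friction-cone half-angle arctan(mu).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    n1 = np.asarray(n1, float) / np.linalg.norm(n1)
    n2 = np.asarray(n2, float) / np.linalg.norm(n2)

    d = p2 - p1
    d /= np.linalg.norm(d)                    # unit vector from contact 1 to contact 2
    half_angle = np.arctan(mu)                # friction-cone half-angle

    a1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))    # angle at contact 1
    a2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))   # angle at contact 2
    return bool(a1 <= half_angle and a2 <= half_angle)
```

For example, two directly opposing contacts on parallel faces pass the test for any positive friction coefficient, while contacts whose normals are perpendicular to the connecting line fail it.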
| Original language | English |
| --- | --- |
| Article number | 061005 |
| Journal | Journal of Computing and Information Science in Engineering |
| Volume | 24 |
| Issue number | 6 |
| DOIs | |
| State | Published - 1 Jun 2024 |
Keywords
- force-closure
- grasping detection
- R2CNN
- robotic arm