Abstract
Automatic recognition of human gestures from camera images is an interesting topic for developing intelligent vision systems. In this paper, we propose a convolutional neural network (CNN) method to recognize the hand gestures of human task activities from a camera image. To achieve robust performance, a skin model and a calibration of hand position and orientation are applied to obtain the training and testing data for the CNN. Since lighting conditions strongly affect skin color, we adopt a Gaussian mixture model (GMM) to train the skin model, which is then used to robustly filter out non-skin colors in an image. The calibration of hand position and orientation translates and rotates the hand image to a neutral pose, and the calibrated images are used to train the CNN. In our experiments, we validate the proposed method on human gesture recognition and show robust results under various hand positions, orientations, and lighting conditions. An experimental evaluation of seven subjects performing seven hand gestures, with an average recognition accuracy of about 95.96%, demonstrates the feasibility and reliability of the proposed method.
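The sketch below is a minimal illustration of the pipeline described in the abstract, assuming OpenCV, NumPy, scikit-learn, and Keras. The function names (`train_skin_gmm`, `skin_mask`, `calibrate_hand`, `build_cnn`), the YCrCb color space, the likelihood threshold, the 64×64 input size, and the CNN layer sizes are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2
from sklearn.mixture import GaussianMixture
from tensorflow.keras import layers, models


# 1) Skin model: fit a GMM to skin-colour pixel samples (here in YCrCb space)
#    so that skin regions can be segmented robustly under varying lighting.
def train_skin_gmm(skin_pixels_ycrcb, n_components=3):
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    gmm.fit(skin_pixels_ycrcb)  # skin_pixels_ycrcb: (N, 3) array of samples
    return gmm


def skin_mask(image_bgr, gmm, log_likelihood_threshold=-10.0):
    pixels = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb).reshape(-1, 3)
    scores = gmm.score_samples(pixels.astype(np.float64))
    mask = scores.reshape(image_bgr.shape[:2]) > log_likelihood_threshold
    return mask.astype(np.uint8) * 255


# 2) Calibration: translate and rotate the segmented hand to a neutral pose
#    by aligning the principal axis of the skin region, then crop and resize.
def calibrate_hand(mask, out_size=64):
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float32)
    center = pts.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    major = eigvecs[:, np.argmax(eigvals)]            # dominant hand direction
    angle = np.degrees(np.arctan2(major[1], major[0]))
    rot = cv2.getRotationMatrix2D((float(center[0]), float(center[1])), angle, 1.0)
    aligned = cv2.warpAffine(mask, rot, mask.shape[::-1])
    ys, xs = np.nonzero(aligned)
    crop = aligned[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return cv2.resize(crop, (out_size, out_size))


# 3) A small CNN over the calibrated images, with one output per gesture class
#    (seven gestures in the paper's experiments). The architecture is a guess.
def build_cnn(num_classes=7, input_size=64):
    return models.Sequential([
        layers.Input(shape=(input_size, input_size, 1)),
        layers.Conv2D(16, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
```

In such a setup, the calibrated crops (scaled to [0, 1] and given a channel dimension) would be fed to the CNN for training and classification; the specific training details are not part of the abstract.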
Original language | English |
---|---|
Article number | 6899454 |
Pages (from-to) | 1038-1043 |
Number of pages | 6 |
Journal | IEEE International Conference on Automation Science and Engineering |
Volume | 2014-January |
DOIs | |
State | Published - 2014 |
Event | 2014 IEEE International Conference on Automation Science and Engineering, CASE 2014 - Taipei, Taiwan (18 Aug 2014 → 22 Aug 2014) |
Keywords
- convolutional neural network (CNN)
- Gaussian mixture model (GMM)
- human gesture recognition
- skin model
- calibration of hand orientation