Two main problems make hand gesture tracking especially difficult: the large number of degrees of freedom of the hand, and the rapid movements that occur in natural gestures. Algorithms based on minimizing an objective function typically achieve good accuracy at low frame rates when given a good initialization. However, these methods depend heavily on the initialization point, and fast changes in hand position or gesture provoke a loss of track from which they are unable to recover. We present a method that uses deep learning to train a classifier over a set of 81 gestures, whose output serves as a rough estimate of the hand pose and orientation. This estimate initializes a non-rigid model registration algorithm that recovers the hand parameters, even when the temporal assumption of smooth hand movement is violated. To evaluate the proposed algorithm, experiments are performed on real sequences recorded with an Intel depth sensor to demonstrate its performance in a real scenario.