Abstract
In this paper, we present a robust method for estimating the three-dimensional ego-motion of an observer moving in a static environment. The method combines the optical flow fields observed by multiple cameras to resolve the ambiguity of 3-D motion recovery that arises from a small field of view and small depth variation within it. Two residual functions are proposed to estimate the ego-motion in different situations. In the non-degenerate case, both the direction and the scale of the three-dimensional rotation and translation can be obtained. In the degenerate case, rotation can still be obtained, but translation can only be recovered up to a scale factor. Both the number of cameras and their placement affect the accuracy of the estimated ego-motion. We compare different camera configurations through simulation. Results of real-world experiments are also given to demonstrate the benefits of our method.
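The fusion idea in the abstract can be illustrated as a linear least-squares problem. The sketch below is not the paper's actual residual-function formulation; it is a simplified illustration that assumes calibrated pinhole cameras rigidly mounted on a common rig, known per-point depths (the paper instead exploits the camera baseline to resolve scale), and noiseless flow generated from the instantaneous motion-field equations. All rig and camera parameters here are made-up example values.

```python
import numpy as np

def skew(v):
    # Cross-product matrix: skew(v) @ x == np.cross(v, x)
    return np.array([[0., -v[2], v[1]],
                     [v[2], 0., -v[0]],
                     [-v[1], v[0], 0.]])

def flow_rows(x, y, Z):
    # Instantaneous motion-field model at normalized image point (x, y)
    # with depth Z: flow = A @ v + B @ w, where (v, w) are the camera's
    # linear and angular velocity in its own frame.
    A = np.array([[-1/Z, 0., x/Z],
                  [0., -1/Z, y/Z]])
    B = np.array([[x*y, -(1 + x*x), y],
                  [1 + y*y, -x*y, -x]])
    return A, B

rng = np.random.default_rng(0)

# Hypothetical two-camera rig: R maps rig coordinates to camera
# coordinates, c is the camera centre expressed in the rig frame.
cams = [
    (np.eye(3), np.zeros(3)),                                # forward-facing
    (np.array([[0., 0., -1.],
               [0., 1., 0.],
               [1., 0., 0.]]), np.array([0.5, 0., 0.])),     # side-facing
]

t_true = np.array([0.2, -0.1, 1.0])     # rig translation (example values)
w_true = np.array([0.02, 0.05, -0.01])  # rig rotation (example values)

rows, rhs = [], []
for R, c in cams:
    # Rigid transfer of the rig velocity (t, w) to this camera's frame:
    # v_cam = R (t + w x c) = R t - R [c]_x w,  w_cam = R w.
    T = np.zeros((6, 6))
    T[:3, :3] = R
    T[:3, 3:] = -R @ skew(c)
    T[3:, 3:] = R
    for _ in range(30):
        x, y = rng.uniform(-0.5, 0.5, 2)  # normalized image coordinates
        Z = rng.uniform(2.0, 10.0)        # known depth (simplification)
        A, B = flow_rows(x, y, Z)
        M = np.hstack([A, B]) @ T         # 2 equations, linear in (t, w)
        rows.append(M)
        rhs.append(M @ np.concatenate([t_true, w_true]))  # synthetic flow

# Stack all flow constraints from both cameras and solve jointly.
M = np.vstack(rows)
b = np.concatenate(rhs)
est, *_ = np.linalg.lstsq(M, b, rcond=None)
print("t:", est[:3], " w:", est[3:])
```

Because the side-facing camera is displaced from the rig origin, its flow constraints couple rotation and translation differently from the forward camera's, which is what lets a joint solve pin down the motion that a single narrow-field camera would leave ambiguous.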
| Original language | English |
| --- | --- |
| Pages (from-to) | 1573-1583 |
| Number of pages | 11 |
| Journal | Pattern Recognition |
| Volume | 34 |
| Issue number | 8 |
| DOIs | |
| State | Published - 1 Aug 2001 |
Keywords
- Ego-motion estimation
- Multiple sensors
- Optical flow