Three-dimensional ego-motion estimation from motion fields observed with multiple cameras

Yong-Sheng Chen, Lin Gwo Liou, Yi Ping Hung*, Chiou Shann Fuh

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

14 Scopus citations


In this paper, we present a robust method for estimating the three-dimensional ego-motion of an observer moving in a static environment. The method combines the optical flow fields observed with multiple cameras to avoid the ambiguity of three-dimensional motion recovery caused by a small field of view and small depth variation within that field of view. Two residual functions are proposed to estimate the ego-motion in different situations. In the non-degenerate case, both the direction and the scale of the three-dimensional rotation and translation can be obtained. In the degenerate case, rotation can still be fully recovered, but translation can be obtained only up to a scale factor. Both the number of cameras and their placement affect the accuracy of the estimated ego-motion, so we compare different camera configurations through simulation. Results of real-world experiments are also given to demonstrate the benefits of our method.
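The core idea of combining flow fields from several rigidly mounted cameras can be illustrated with a minimal simulation sketch. This is not the paper's residual-function formulation: it assumes calibrated cameras, noiseless flow, and known scene points (so depths are known), under which the instantaneous motion field is linear in the rig's angular and translational velocities and the stacked system can be solved by least squares. All function and variable names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_one_camera(P_cam, omega_c, v_c):
    """Instantaneous motion field for one calibrated (focal length 1) camera.
    P_cam: (N, 3) static scene points in the camera frame.
    A camera moving with angular velocity omega_c and linear velocity v_c
    in a static scene sees each point move as Pdot = -(v_c + omega_c x P)."""
    Pdot = -(v_c + np.cross(omega_c, P_cam))
    x = P_cam[:, 0] / P_cam[:, 2]
    y = P_cam[:, 1] / P_cam[:, 2]
    Z = P_cam[:, 2]
    u = (Pdot[:, 0] - x * Pdot[:, 2]) / Z   # d/dt of x = X/Z
    v = (Pdot[:, 1] - y * Pdot[:, 2]) / Z   # d/dt of y = Y/Z
    return np.concatenate([u, v])

def rig_flow(m, cams, pts_cam):
    """Stacked flow of all cameras for rig motion m = (omega, t).
    cams: list of (R, c), with R mapping camera to rig coordinates
    and c the camera centre in the rig frame."""
    omega, t = m[:3], m[3:]
    out = []
    for (R, c), P in zip(cams, pts_cam):
        omega_c = R.T @ omega                       # rig rotation seen in camera frame
        v_c = R.T @ (t + np.cross(omega, c))        # offset c couples rotation into translation
        out.append(flow_one_camera(P, omega_c, v_c))
    return np.concatenate(out)

# Two cameras looking in different directions, displaced from the rig origin.
Ry = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # 90 deg about y
cams = [(np.eye(3), np.array([0.3, 0.0, 0.0])),
        (Ry,        np.array([-0.3, 0.0, 0.0]))]

# Random scene points in front of each camera (depths 2..5, known in simulation).
pts_cam = [np.column_stack([rng.uniform(-1, 1, 50),
                            rng.uniform(-1, 1, 50),
                            rng.uniform(2, 5, 50)]) for _ in cams]

m_true = np.array([0.02, -0.01, 0.03, 0.1, 0.05, -0.2])  # (omega, t)
b = rig_flow(m_true, cams, pts_cam)

# The flow is linear in (omega, t), so the Jacobian is obtained exactly by
# evaluating the six basis motions; solve the stacked linear system.
J = np.column_stack([rig_flow(e, cams, pts_cam) for e in np.eye(6)])
m_est, *_ = np.linalg.lstsq(J, b, rcond=None)
print(np.round(m_est, 6))
```

Because the cameras are displaced from the rig origin, rotation induces different translational components in each camera, which is what lets the full scale of the translation be recovered in the non-degenerate case; with unknown depths and noisy flow, the paper's residual functions replace the exact linear solve sketched here.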

Original language: English
Pages (from-to): 1573-1583
Number of pages: 11
Journal: Pattern Recognition
Issue number: 8
State: Published - 1 Aug 2001


Keywords:
  • Ego-motion estimation
  • Multiple sensors
  • Optical flow


