Comparison between immersion-based and toboggan-based watershed image segmentation

Yung Chieh Lin*, Yu Pao Tsai, Yi Ping Hung, Zen-Chung Shih

*Corresponding author for this work

Research output: Article › peer-review

73 Citations (Scopus)

Abstract

Watershed segmentation has recently become a popular tool for image segmentation. There are two approaches to implementing watershed segmentation: the immersion approach and the toboggan approach. Conceptually, the immersion approach can be viewed as proceeding from low altitude to high altitude, and the toboggan approach as proceeding from high altitude to low altitude. The former has recently been the more popular (e.g., Vincent and Soille), but the latter has its own supporters (e.g., Mortensen and Barrett). It was not clear whether the two approaches could lead to exactly the same segmentation result, or which approach was more efficient. In this paper, we present two "order-invariant" algorithms for watershed segmentation, one based on the immersion approach and the other on the toboggan approach. By introducing a special RIDGE label to achieve the property of order-invariance, we find that the two conceptually opposite approaches can indeed produce the same segmentation result. When running on a Pentium-III PC, both of our algorithms require, on average, less than 1/30 s for a 256 × 256 image and less than 1/5 s for a 512 × 512 image. What is more surprising is that the toboggan algorithm, which is less well known in the computer vision community, turns out to run faster than the immersion algorithm for almost all the test images we have used, especially when the image is large, say, 512 × 512 or larger. This paper also gives some explanation as to why the toboggan algorithm can be more efficient in most cases.
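To make the toboggan idea concrete, the following is a minimal Python sketch of a generic toboggan-style labeling pass, in which each pixel slides along a steepest-descent path of the gradient image until it reaches a local minimum, and all pixels that settle in the same minimum form one catchment basin. This is only an illustration of the general technique, not the paper's order-invariant algorithm: the function name and the 4-neighbour descent rule are illustrative assumptions, and the sketch deliberately omits plateau handling and the RIDGE label that the paper introduces.

```python
import numpy as np

def toboggan_labels(grad):
    """Label each pixel of a gradient image by the local minimum it slides to
    along a steepest-descent (toboggan) path. Pixels reaching the same minimum
    share a label (catchment basin). Plateaus and ridge pixels are not treated
    specially here, unlike the paper's order-invariant algorithms."""
    h, w = grad.shape
    labels = np.zeros((h, w), dtype=np.int32)
    next_label = 0

    def lowest_neighbour(y, x):
        # Return the 4-neighbour with the strictly lowest gradient value,
        # or (y, x) itself if no neighbour is lower (a local minimum).
        best = (y, x)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and grad[ny, nx] < grad[best]:
                best = (ny, nx)
        return best

    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                continue
            path, cur = [], (y, x)
            while not labels[cur]:          # stop when hitting an already-labelled pixel
                path.append(cur)
                nxt = lowest_neighbour(*cur)
                if nxt == cur:              # reached a local minimum: start a new basin
                    next_label += 1
                    labels[cur] = next_label
                    break
                cur = nxt
            for p in path:                  # every pixel on the path joins that basin
                labels[p] = labels[cur]
    return labels
```

Because a sliding path terminates as soon as it meets an already-labelled pixel, previously computed results are reused, which gives an intuitive sense of why a toboggan pass can be fast; the paper's own analysis should be consulted for the actual efficiency comparison with the immersion approach.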

Original language: English
Pages (from-to): 632-640
Number of pages: 9
Journal: IEEE Transactions on Image Processing
Volume: 15
Issue number: 3
DOIs
Publication status: Published - 1 Mar 2006

