Object-based land cover classification using airborne lidar and different spectral images

Tee-Ann Teo*, Chun Hsuan Huang

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

7 Scopus citations


Both land cover spectral information and 3D surface information can be obtained efficiently via remote sensing technologies. Spectral images provide spectral features, whereas lidar point clouds contain 3D spatial features; the multi-sensor data can therefore be integrated to obtain useful information for different applications. This study integrates lidar data with different spectral features for land cover classification. Because different spectral images have different characteristics, this study used hyperspectral images and 4- and 8-band WorldView-2 multispectral images to distinguish different land covers. The major steps are feature selection, object-based classification, and evaluation. In feature selection, appropriate features were selected according to the land cover characteristics. Object-based classification was implemented using image segmentation and supervised classification. Finally, different combinations were evaluated against reference data to provide comprehensive analyses. This study used ITRES CASI-1500 airborne hyperspectral images, WorldView-2 multispectral images, and Optech ALTM Pegasus lidar data. The experiment compared the results with and without data fusion, and the importance of different spectral features is also discussed. In summary, different land covers with similar spectral features can be identified using lidar spatial features, and integrating spectral images with lidar data may improve land cover classification accuracy.

Original language: English
Pages (from-to): 491-504
Number of pages: 14
Journal: Terrestrial, Atmospheric and Oceanic Sciences
Issue number: 4
State: Published - 1 Aug 2016


  • Hyperspectral image
  • Lidar
  • Object-based classification
  • WorldView-2


