Efficient parallel algorithm for nonlinear dimensionality reduction on GPU

Tsung Tai Yeh, Tseng Yi Chen, Yen Chiu Chen, Wei Kuan Shih

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Scopus citations

Abstract

Advances in nonlinear dimensionality reduction provide a way to understand and visualize the underlying structure of complex data sets. The performance of large-scale nonlinear dimensionality reduction is of key importance in data mining, machine learning, and data analysis. In this paper, we concentrate on improving the performance of nonlinear dimensionality reduction on large-scale data sets using the GPU. In particular, we focus on two subproblems, k-nearest-neighbor (KNN) search and sparse spectral decomposition for large-scale data, and propose an efficient framework for Locally Linear Embedding (LLE). We implement a k-d tree based KNN algorithm and a Krylov subspace method on the GPU to accelerate nonlinear dimensionality reduction for large-scale data. Our GPU-based k-d tree LLE runs roughly 30-60× faster than the brute-force KNN LLE model [10] on the CPU. Overall, our methods save O(n² - 6n - 2k - 3) memory space.

Original language: English
Title of host publication: Proceedings - 2010 IEEE International Conference on Granular Computing, GrC 2010
Pages: 592-597
Number of pages: 6
DOIs
State: Published - 2010
Event: 2010 IEEE International Conference on Granular Computing, GrC 2010 - San Jose, CA, United States
Duration: 14 Aug 2010 - 16 Aug 2010

Publication series

Name: Proceedings - 2010 IEEE International Conference on Granular Computing, GrC 2010

Conference

Conference: 2010 IEEE International Conference on Granular Computing, GrC 2010
Country/Territory: United States
City: San Jose, CA
Period: 14/08/10 - 16/08/10

