TY - GEN
T1 - Efficient parallel algorithm for nonlinear dimensionality reduction on GPU
AU - Yeh, Tsung Tai
AU - Chen, Tseng Yi
AU - Chen, Yen Chiu
AU - Shih, Wei Kuan
PY - 2010
Y1 - 2010
N2 - Advances in nonlinear dimensionality reduction provide a way to understand and visualize the underlying structure of complex data sets. The performance of large-scale nonlinear dimensionality reduction is of key importance in data mining, machine learning, and data analysis. In this paper, we concentrate on improving the performance of nonlinear dimensionality reduction using large-scale data sets on the GPU. In particular, we focus on solving problems including k nearest neighbor (KNN) search and sparse spectral decomposition for large-scale data, and propose an efficient framework for Local Linear Embedding (LLE). We implement a k-d tree based KNN algorithm and a Krylov subspace method on the GPU to accelerate nonlinear dimensionality reduction for large-scale data. Our results show that the GPU-based k-d tree LLE runs up to about 30-60X faster than the brute-force KNN [10] LLE model on the CPU. Overall, our methods save O(n^2 - 6n - 2k - 3) memory space.
AB - Advances in nonlinear dimensionality reduction provide a way to understand and visualize the underlying structure of complex data sets. The performance of large-scale nonlinear dimensionality reduction is of key importance in data mining, machine learning, and data analysis. In this paper, we concentrate on improving the performance of nonlinear dimensionality reduction using large-scale data sets on the GPU. In particular, we focus on solving problems including k nearest neighbor (KNN) search and sparse spectral decomposition for large-scale data, and propose an efficient framework for Local Linear Embedding (LLE). We implement a k-d tree based KNN algorithm and a Krylov subspace method on the GPU to accelerate nonlinear dimensionality reduction for large-scale data. Our results show that the GPU-based k-d tree LLE runs up to about 30-60X faster than the brute-force KNN [10] LLE model on the CPU. Overall, our methods save O(n^2 - 6n - 2k - 3) memory space.
UR - http://www.scopus.com/inward/record.url?scp=77958593145&partnerID=8YFLogxK
U2 - 10.1109/GrC.2010.145
DO - 10.1109/GrC.2010.145
M3 - Conference contribution
AN - SCOPUS:77958593145
SN - 9780769541617
T3 - Proceedings - 2010 IEEE International Conference on Granular Computing, GrC 2010
SP - 592
EP - 597
BT - Proceedings - 2010 IEEE International Conference on Granular Computing, GrC 2010
T2 - 2010 IEEE International Conference on Granular Computing, GrC 2010
Y2 - 14 August 2010 through 16 August 2010
ER -