Abstract
The use of a graphics processing unit (GPU) together with a CPU, referred to as GPU-accelerated computing, to accelerate tasks that require extensive computation has been a trend in high-performance computing for the last few years. In this paper, we propose a new GPU-accelerated method to parallelize the extraction of a set of features based on the gray-level co-occurrence matrix (GLCM), which may be the most widely used texture-feature method. The method is evaluated on various GPU devices and compared with its serial counterpart, implemented and optimized in both Matlab and C on a single machine. A series of experimental tests on magnetic resonance (MR) brain images demonstrates that the proposed method is very efficient and superior to its serial counterpart, achieving 25-105× speedups for single precision and 15-85× speedups for double precision on a GeForce GTX 1080 across different ROI sizes.
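The paper's own implementation is not reproduced on this page; as a minimal sketch of the parallelization idea the abstract describes, the CUDA fragment below assigns one thread per pixel and lets each thread atomically increment the co-occurrence cell for its (pixel, neighbor) gray-level pair. All names (`glcm_kernel`, `NUM_LEVELS`, the offset `dx`/`dy`) and the quantization to 8 gray levels are illustrative assumptions, not the authors' code.

```cuda
// Sketch only: one thread per pixel, histogram built with atomics.
#include <cuda_runtime.h>
#include <stdio.h>

#define NUM_LEVELS 8  // assumed gray-level quantization, not from the paper

__global__ void glcm_kernel(const unsigned char *img, int width, int height,
                            int dx, int dy, unsigned int *glcm)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    int nx = x + dx, ny = y + dy;
    // Skip threads outside the image or whose neighbor falls outside it.
    if (x >= width || y >= height ||
        nx < 0 || nx >= width || ny < 0 || ny >= height)
        return;
    int i = img[y * width + x];    // reference gray level
    int j = img[ny * width + nx];  // neighbor gray level at offset (dx, dy)
    atomicAdd(&glcm[i * NUM_LEVELS + j], 1u);  // count the co-occurrence
}

int main(void)
{
    const int W = 64, H = 64;
    unsigned char h_img[W * H];
    for (int k = 0; k < W * H; ++k)
        h_img[k] = (unsigned char)(k % NUM_LEVELS);  // toy test image

    unsigned char *d_img;
    unsigned int *d_glcm;
    cudaMalloc(&d_img, W * H);
    cudaMalloc(&d_glcm, NUM_LEVELS * NUM_LEVELS * sizeof(unsigned int));
    cudaMemcpy(d_img, h_img, W * H, cudaMemcpyHostToDevice);
    cudaMemset(d_glcm, 0, NUM_LEVELS * NUM_LEVELS * sizeof(unsigned int));

    dim3 block(16, 16), grid((W + 15) / 16, (H + 15) / 16);
    glcm_kernel<<<grid, block>>>(d_img, W, H, 1, 0, d_glcm);  // offset (1, 0)

    unsigned int h_glcm[NUM_LEVELS * NUM_LEVELS];
    cudaMemcpy(h_glcm, d_glcm, sizeof(h_glcm), cudaMemcpyDeviceToHost);
    printf("glcm[0][1] = %u\n", h_glcm[1]);

    cudaFree(d_img);
    cudaFree(d_glcm);
    return 0;
}
```

Scalar texture features (contrast, energy, homogeneity, and so on) would then be reduced from the completed matrix; the published method additionally optimizes across ROI sizes and precisions, which this sketch does not attempt.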
Original language | English |
---|---|
Article number | 8049449 |
Pages (from-to) | 22634-22646 |
Number of pages | 13 |
Journal | IEEE Access |
Volume | 5 |
DOIs | |
Publication status | Published - 23 Sep 2017 |