Abstract
The traditional data analysis and prediction approach assumes that the data distribution is normal and does not change over time; it can therefore predict unlabeled data by analyzing static, historical data. In today's frequently changing big-data environment, however, this approach is no longer effective, because it cannot handle concept drift in a Dynamic Data Driven Application System (DDDAS). This study proposes a parallel detection and prediction method for concept drift problems in a DDDAS. The proposed method detects dynamic, changing data and feeds the result back to the prediction model, which is revised for better subsequent predictions. Furthermore, the method computes a global prediction result by aggregating local predictions in a resource-bounded environment, so prediction accuracy increases while computation time decreases. In the simulation, Map-Reduce technology is used for parallel processing. The simulation results show that prediction accuracy is raised by 14% and execution time is improved by almost 45%.
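To make the aggregation idea concrete, the following is a minimal Python sketch rather than the authors' implementation: local models are trained on data partitions in parallel (the map step), their predictions are combined by majority vote (the reduce step), and a simple error-rate monitor stands in for drift detection, triggering retraining feedback. The `NearestCentroid` model, the `drift_detected` threshold, and the use of `multiprocessing` in place of Hadoop Map-Reduce are all assumptions made for illustration.

```python
# Illustrative sketch only (not the paper's implementation): map = train local
# models on partitions in parallel, reduce = majority-vote aggregation of local
# predictions, plus a simple error-rate monitor as a stand-in drift signal.
import numpy as np
from collections import Counter
from multiprocessing import Pool

class NearestCentroid:
    """Toy local model: assign each sample to the nearest class centroid."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

def train_local(args):
    """Map step: fit one local model on one data partition."""
    X_part, y_part = args
    return NearestCentroid().fit(X_part, y_part)

def aggregate(local_preds):
    """Reduce step: majority vote over the local predictions for each sample."""
    return np.array([Counter(col).most_common(1)[0][0] for col in zip(*local_preds)])

def drift_detected(y_true, y_pred, threshold=0.3):
    """Hypothetical drift signal: error rate above a fixed threshold."""
    return np.mean(y_true != y_pred) > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(600, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    parts = np.array_split(np.arange(len(X)), 4)          # four data partitions
    with Pool(4) as pool:                                  # parallel "map" phase
        models = pool.map(train_local, [(X[p], y[p]) for p in parts])
    X_new = rng.normal(size=(100, 2))
    y_new = (X_new[:, 0] - X_new[:, 1] > 0).astype(int)    # the concept has shifted
    y_hat = aggregate([m.predict(X_new) for m in models])  # "reduce" phase
    if drift_detected(y_new, y_hat):
        print("Drift suspected: feed the new data back and retrain the local models.")
```

In this sketch the drift check runs on the aggregated output, so a single retraining decision can be fed back to all local models; a per-partition check would be an equally plausible reading of the abstract.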
| Original language | English |
| --- | --- |
| Pages (from-to) | 1413-1426 |
| Number of pages | 14 |
| Journal | Journal of Intelligent and Fuzzy Systems |
| Volume | 32 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2017 |