TY - JOUR
T1 - DeepPrefetcher
T2 - A Deep Learning Framework for Data Prefetching in Flash Storage Devices
AU - Ganfure, Gaddisa Olani
AU - Wu, Chun-Feng
AU - Chang, Yuan-Hao
AU - Shih, Wei-Kuan
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - In today's information-driven world, data access latency accounts for a significant portion of the time spent processing user requests. One potential solution is prefetching, a technique that speculates on future requests and moves the corresponding data closer to the processing unit. However, the block access requests received by a storage device show poor spatial locality because most file-related locality is absorbed in the higher layers of the memory hierarchy, including the CPU cache and main memory. In addition, multithreading interleaves access requests, making prefetching at the storage level more difficult for existing prefetching techniques. To address this, we propose and evaluate DeepPrefetcher, a novel deep-neural-network-inspired, context-aware prefetching method that adapts to arbitrary memory access patterns. DeepPrefetcher learns block access pattern contexts using distributed representations and leverages a long short-term memory (LSTM) model for context-aware data prefetching. Instead of using logical block address (LBA) values directly, we model the difference between successive access requests, which exhibits more pattern regularity than the raw LBA values. By targeting access pattern sequences in this manner, DeepPrefetcher can learn the vital context from a long input LBA sequence and predict both previously seen and unseen access patterns. Experimental results reveal that DeepPrefetcher improves average prefetch accuracy, coverage, and speedup by 21.5%, 19.5%, and 17.2%, respectively, compared with baseline prefetching strategies. Overall, the proposed prefetching approach surpasses the other schemes in all benchmarks, and the outcomes are promising.
AB - In today's information-driven world, data access latency accounts for a significant portion of the time spent processing user requests. One potential solution is prefetching, a technique that speculates on future requests and moves the corresponding data closer to the processing unit. However, the block access requests received by a storage device show poor spatial locality because most file-related locality is absorbed in the higher layers of the memory hierarchy, including the CPU cache and main memory. In addition, multithreading interleaves access requests, making prefetching at the storage level more difficult for existing prefetching techniques. To address this, we propose and evaluate DeepPrefetcher, a novel deep-neural-network-inspired, context-aware prefetching method that adapts to arbitrary memory access patterns. DeepPrefetcher learns block access pattern contexts using distributed representations and leverages a long short-term memory (LSTM) model for context-aware data prefetching. Instead of using logical block address (LBA) values directly, we model the difference between successive access requests, which exhibits more pattern regularity than the raw LBA values. By targeting access pattern sequences in this manner, DeepPrefetcher can learn the vital context from a long input LBA sequence and predict both previously seen and unseen access patterns. Experimental results reveal that DeepPrefetcher improves average prefetch accuracy, coverage, and speedup by 21.5%, 19.5%, and 17.2%, respectively, compared with baseline prefetching strategies. Overall, the proposed prefetching approach surpasses the other schemes in all benchmarks, and the outcomes are promising.
KW - Data prefetching
KW - deep learning
KW - flash storage devices
KW - logical block address (LBA)
UR - http://www.scopus.com/inward/record.url?scp=85096037489&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2020.3012173
DO - 10.1109/TCAD.2020.3012173
M3 - Article
AN - SCOPUS:85096037489
SN - 0278-0070
VL - 39
SP - 3311
EP - 3322
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 11
M1 - 9211554
ER -