Abstract
NVDIMM is a production-ready device that provides larger memory capacity at lower cost. However, directly using NVDIMM as main memory seriously degrades system performance because of the 'great memory wall': the slow medium in NVDIMM (e.g., flash memory) is several orders of magnitude slower than the fast medium (e.g., DRAM). In this article, we present a joint host/CPU and NVDIMM management framework that breaks down the great memory wall by bridging the process-information gap between the host/CPU and the NVDIMM. Within this framework, a page semantic-aware strategy exploits process access patterns to precisely predict, mark, and relocate data or memory pages to the fast memory in advance, further reducing the frequency of slow-memory accesses. The proposed framework and strategy were evaluated with several well-known benchmarks, and the results are encouraging.
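The idea of promoting hot pages to the fast medium based on observed access behavior can be illustrated with a toy simulation. This is a minimal sketch under assumed parameters (`fast_capacity`, `promote_threshold`, and the simple count-based heuristic are all illustrative), not the paper's actual page semantic-aware algorithm:

```python
from collections import defaultdict

class HybridMemorySim:
    """Toy model of a DRAM+flash NVDIMM: pages are served from slow
    memory until an access-count heuristic promotes them to fast memory.
    (Illustrative only; the paper's strategy uses richer page semantics.)"""

    def __init__(self, fast_capacity, promote_threshold=2):
        self.fast_capacity = fast_capacity        # DRAM pages available
        self.promote_threshold = promote_threshold
        self.fast = set()                         # pages currently in DRAM
        self.access_count = defaultdict(int)
        self.fast_accesses = 0                    # hits served by DRAM
        self.slow_accesses = 0                    # hits served by flash

    def access(self, page):
        self.access_count[page] += 1
        if page in self.fast:
            self.fast_accesses += 1
        else:
            self.slow_accesses += 1
            # Promote a page once its access count suggests it is hot
            # and there is still room in the fast medium.
            if (self.access_count[page] >= self.promote_threshold
                    and len(self.fast) < self.fast_capacity):
                self.fast.add(page)
```

For example, five consecutive accesses to the same page pay the slow-memory cost only until the page crosses the threshold and is relocated; every later access is served from DRAM.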
| Original language | English |
| --- | --- |
| Article number | 8950228 |
| Pages (from-to) | 722-733 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Computers |
| Volume | 69 |
| Issue number | 5 |
| DOIs | |
| State | Published - 1 May 2020 |
Keywords
- Deep learning
- high performance computing (HPC)
- hybrid memory
- large memory capacity
- memory wall
- NVDIMM
- process access behaviors
- process state diagram
- response time