Scalable power management using multilevel reinforcement learning for multiprocessors

Gung Yu Pan*, Jing Yang Jou, Bo-Cheng Lai

*Corresponding author for this work

    Research output: Article, peer-reviewed

    11 Citations (Scopus)

    Abstract

    Dynamic power management has become an imperative design factor for attaining energy efficiency in modern systems. Among various power management schemes, learning-based policies that adapt to different environments and applications have demonstrated performance superior to other approaches. However, they suffer from a scalability problem on multiprocessors as the number of cores in a system grows. In this article, we propose a scalable and effective online policy called MultiLevel Reinforcement Learning (MLRL). By exploiting a hierarchical paradigm, the time complexity of MLRL is O(n lg n) for n cores, and the convergence rate is greatly improved by compressing the redundant search space. Advanced techniques, such as function approximation and an action selection scheme, are included to enhance the generality and stability of the proposed policy. In simulations on the SPLASH-2 benchmarks, MLRL runs 53% faster and outperforms the state-of-the-art work with 13.6% energy savings and a 2.7% latency penalty on average. The generality and scalability of MLRL are also validated through extensive simulations.
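    The following is a minimal, self-contained sketch of the hierarchical idea, not the authors' MLRL implementation: the two-level agent hierarchy, discretized utilization states, DVFS actions, and reward below are simplifying assumptions introduced only for illustration. A chip-level Q-learning agent picks a coarse power cap, and per-core agents then pick frequency levels consistent with it, so each agent explores only a small local action space rather than the joint space over all n cores.

import random
from collections import defaultdict

class QAgent:
    """Tabular Q-learning agent with epsilon-greedy action selection."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)            # (state, action) -> value estimate
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state, allowed=None):
        allowed = allowed or self.actions
        if random.random() < self.epsilon:     # explore
            return random.choice(allowed)
        return max(allowed, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, s, a, reward, s_next):
        best = max(self.q[(s_next, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (reward + self.gamma * best - self.q[(s, a)])

# Hypothetical two-level hierarchy: one chip-level agent sets a power cap,
# and one small per-core agent picks a DVFS level consistent with that cap,
# so no agent ever searches the joint action space over all cores.
N_CORES = 4
FREQ = {"low": 0.4, "mid": 0.7, "high": 1.0}   # assumed normalized frequency levels
root = QAgent(actions=["cap-low", "cap-high"])
leaves = [QAgent(actions=list(FREQ)) for _ in range(N_CORES)]

def decide(utils):
    """One management epoch, given per-core utilizations in [0, 1]."""
    chip_state = round(sum(utils) / len(utils), 1)      # coarse chip-level state
    cap = root.act(chip_state)
    allowed = ["low", "mid"] if cap == "cap-low" else list(FREQ)
    states = [round(u, 1) for u in utils]
    levels = [leaf.act(s, allowed) for leaf, s in zip(leaves, states)]

    # Assumed reward: penalize both wasted power and lost performance.
    rewards = [-(FREQ[lv] - u) ** 2 for lv, u in zip(levels, utils)]
    for leaf, s, lv, r in zip(leaves, states, levels, rewards):
        leaf.update(s, lv, r, s)
    root.update(chip_state, cap, sum(rewards) / len(utils), chip_state)
    return levels

print(decide([0.2, 0.9, 0.5, 0.7]))

    Extending the single chip-level agent to a tree of budget-splitting agents over the n cores is what would give the O(n lg n) per-decision cost cited in the abstract.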

    Original language: English
    Article number: 33
    Journal: ACM Transactions on Design Automation of Electronic Systems
    Volume: 19
    Issue number: 4
    DOIs
    Publication status: Published - 1 Jan 2014
