TY - JOUR
T1 - Dynamic Parallel Machine Scheduling With Deep Q-Network
AU - Liu, Chien-Liang
AU - Tseng, Chun-Jan
AU - Huang, Tzu-Hsuan
AU - Wang, Jhih-Wun
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/11/1
Y1 - 2023/11/1
N2 - Parallel machine scheduling (PMS) is a common setting in many manufacturing facilities, in which each job may be processed on any one of several machines of the same type. It involves scheduling n jobs on m machines to minimize certain objective functions. For nonpreemptive scheduling, most problems are not only NP-hard but also difficult to solve in practice. Moreover, many unexpected events, such as machine failures and requirement changes, are inevitable in practical production processes, so static scheduling methods require rescheduling. Deep reinforcement learning (DRL), which combines deep learning and reinforcement learning, has achieved promising results in several domains and has shown the potential to solve large Markov decision process (MDP) optimization tasks. Because PMS problems can be formulated as MDPs, we devise a DRL method to deal with PMS problems in a dynamic environment. We develop a novel DRL-based PMS method, called DPMS, whose model considers the characteristics of PMS in designing the states and the reward. The actions are dispatching rules, so DPMS can be regarded as a meta-dispatching-rule system that efficiently selects a sequence of dispatching rules according to the current environment and unexpected events. The experimental results demonstrate that DPMS yields promising results in a dynamic environment by learning from the interactions between the agent and the environment. Furthermore, we conduct extensive experiments to analyze DPMS in the context of developing DRL methods for dynamic PMS problems.
AB - Parallel machine scheduling (PMS) is a common setting in many manufacturing facilities, in which each job may be processed on any one of several machines of the same type. It involves scheduling n jobs on m machines to minimize certain objective functions. For nonpreemptive scheduling, most problems are not only NP-hard but also difficult to solve in practice. Moreover, many unexpected events, such as machine failures and requirement changes, are inevitable in practical production processes, so static scheduling methods require rescheduling. Deep reinforcement learning (DRL), which combines deep learning and reinforcement learning, has achieved promising results in several domains and has shown the potential to solve large Markov decision process (MDP) optimization tasks. Because PMS problems can be formulated as MDPs, we devise a DRL method to deal with PMS problems in a dynamic environment. We develop a novel DRL-based PMS method, called DPMS, whose model considers the characteristics of PMS in designing the states and the reward. The actions are dispatching rules, so DPMS can be regarded as a meta-dispatching-rule system that efficiently selects a sequence of dispatching rules according to the current environment and unexpected events. The experimental results demonstrate that DPMS yields promising results in a dynamic environment by learning from the interactions between the agent and the environment. Furthermore, we conduct extensive experiments to analyze DPMS in the context of developing DRL methods for dynamic PMS problems.
KW - deep Q-network (DQN)
KW - deep reinforcement learning (DRL)
KW - DRL-based PMS (DPMS)
KW - Markov decision process (MDP)
KW - parallel machine scheduling (PMS)
UR - http://www.scopus.com/inward/record.url?scp=85165280904&partnerID=8YFLogxK
U2 - 10.1109/TSMC.2023.3289322
DO - 10.1109/TSMC.2023.3289322
M3 - Article
AN - SCOPUS:85165280904
SN - 2168-2216
VL - 53
SP - 6792
EP - 6804
JO - IEEE Transactions on Systems, Man, and Cybernetics: Systems
JF - IEEE Transactions on Systems, Man, and Cybernetics: Systems
IS - 11
ER -