TY - JOUR
T1 - AI for AI-based intrusion detection as a service
T2 - Reinforcement learning to configure models, tasks, and capacities
AU - Lin, Ying Dar
AU - Huang, Hao Xuan
AU - Sudyana, Didik
AU - Lai, Yuan Cheng
N1 - Publisher Copyright:
© 2024 Elsevier Ltd
PY - 2024/9
Y1 - 2024/9
N2 - Intrusion Detection Systems (IDS) increasingly leverage machine learning (ML) to enhance the detection of zero-day attacks. As operational complexities increase, enterprises are turning to Intrusion Detection as a Service (IDaS), requiring advanced solutions for efficient ML model selection and resource allocation. Existing research often focuses primarily on accuracy and computational efficiency, leaving a gap in solutions that can dynamically adapt. This study introduces a novel integrated solution, Auto-IDaS, which employs advanced Reinforcement Learning (RL) techniques for real-time, adaptive management of IDS. Auto-IDaS uses the Deep Q-Network (DQN) algorithm for dynamic ML model selection, automatically adjusting configurations of IDaS in response to fluctuating network traffic conditions. Simultaneously, it utilizes the Twin Delayed Deep Deterministic (TD3) algorithm for optimizing capacity allocation, aiming to minimize computational costs while maintaining service quality. This dual approach is innovative in its use of RL to address both selection and allocation challenges within IDaS frameworks. The effectiveness of TD3 is compared against Simulated Annealing (SA), a traditional optimization technique. The results demonstrate that utilizing DQN to dynamically select the model significantly improves the reward by 0.29% to 27.04%, effectively balancing detection performance (F1 score), detection time, and computation cost. Regarding capacity allocation, TD3 reaches decisions approximately 5×10^6 times faster than SA while retaining decision quality within 10% of SA's performance.
AB - Intrusion Detection Systems (IDS) increasingly leverage machine learning (ML) to enhance the detection of zero-day attacks. As operational complexities increase, enterprises are turning to Intrusion Detection as a Service (IDaS), requiring advanced solutions for efficient ML model selection and resource allocation. Existing research often focuses primarily on accuracy and computational efficiency, leaving a gap in solutions that can dynamically adapt. This study introduces a novel integrated solution, Auto-IDaS, which employs advanced Reinforcement Learning (RL) techniques for real-time, adaptive management of IDS. Auto-IDaS uses the Deep Q-Network (DQN) algorithm for dynamic ML model selection, automatically adjusting configurations of IDaS in response to fluctuating network traffic conditions. Simultaneously, it utilizes the Twin Delayed Deep Deterministic (TD3) algorithm for optimizing capacity allocation, aiming to minimize computational costs while maintaining service quality. This dual approach is innovative in its use of RL to address both selection and allocation challenges within IDaS frameworks. The effectiveness of TD3 is compared against Simulated Annealing (SA), a traditional optimization technique. The results demonstrate that utilizing DQN to dynamically select the model significantly improves the reward by 0.29% to 27.04%, effectively balancing detection performance (F1 score), detection time, and computation cost. Regarding capacity allocation, TD3 reaches decisions approximately 5×10^6 times faster than SA while retaining decision quality within 10% of SA's performance.
KW - Auto-IDaS
KW - Auto-configuration
KW - Capacity allocation optimization
KW - Dynamic model selection
KW - ML-based IDaS
UR - http://www.scopus.com/inward/record.url?scp=85197942190&partnerID=8YFLogxK
U2 - 10.1016/j.jnca.2024.103936
DO - 10.1016/j.jnca.2024.103936
M3 - Article
AN - SCOPUS:85197942190
SN - 1084-8045
VL - 229
JO - Journal of Network and Computer Applications
JF - Journal of Network and Computer Applications
M1 - 103936
ER -