TY - JOUR
T1 - TMHSCA: a novel hybrid two-stage mutation with a sine cosine algorithm for discounted {0-1} knapsack problems
AU - Kang, Yan
AU - Wang, Haining
AU - Pu, Bin
AU - Liu, Jiansong
AU - Lee, Shin Jye
AU - Yang, Xuekun
AU - Tao, Liu
N1 - Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
PY - 2023/6
Y1 - 2023/6
N2 - The discounted {0-1} knapsack problem (DKP) is an NP-hard problem that is more challenging than the classical knapsack problem. In this paper, an enhanced version of the sine-cosine algorithm (SCA), called TMHSCA, is proposed to solve the DKP. The SCA is a novel metaheuristic algorithm based on the sine and cosine functions. Three effective improvements are proposed to enhance the SCA's convergence speed and exploitation capability. First, the initial population is divided into three subpopulations generated by random, greedy, and opposition-based learning (OBL) strategies, which effectively enhances population diversity. Second, a forward-inverse sine-cosine search is proposed that expands the search space by using the worst solution as an additional target; this strategy helps the algorithm explore regions of the solution space that are otherwise easily overlooked. Third, a two-stage mutation combines an OBL mutation with the mutation operator of differential evolution, which increases the algorithm's convergence speed and solution fitness. In the proposed two-stage mutation, the number of mutations is adaptively adjusted based on the optimization capability of the best individual. Additionally, a reshuffling repair strategy is proposed that repairs infeasible solutions by sorting items into groups according to their weight-to-value ratios. Extensive experiments are conducted on 40 publicly available datasets, and the results are compared with those of the greedy algorithm, 3 SCA variants, and 8 competitive baseline methods. The experimental results demonstrate that our method achieves state-of-the-art performance.
KW - Combinatorial optimization
KW - Discounted {0-1} knapsack problem
KW - Opposition-based learning theory
KW - Sine-cosine algorithm
UR - http://www.scopus.com/inward/record.url?scp=85149375571&partnerID=8YFLogxK
U2 - 10.1007/s00521-023-08367-6
DO - 10.1007/s00521-023-08367-6
M3 - Article
AN - SCOPUS:85149375571
SN - 0941-0643
VL - 35
SP - 12691
EP - 12713
JO - Neural Computing and Applications
JF - Neural Computing and Applications
IS - 17
ER -