TY - JOUR
T1 - Network Robustness Prediction
T2 - Influence of Training Data Distributions
AU - Lou, Yang
AU - Wu, Chengpei
AU - Li, Junli
AU - Wang, Lin
AU - Chen, Guanrong
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Network robustness refers to the ability of a network to continue functioning under malicious attacks, which is critical for various natural and industrial networks. Network robustness can be quantitatively measured by a sequence of values that record the remaining functionality after sequential node- or edge-removal attacks. Robustness evaluations are traditionally obtained by attack simulations, which are computationally time-consuming and sometimes practically infeasible. Convolutional neural network (CNN)-based prediction provides a cost-efficient approach to fast evaluation of network robustness. In this article, the prediction performances of the learning feature representation-based CNN (LFR-CNN) and PATCHY-SAN methods are compared through extensive empirical experiments. Specifically, three distributions of network size in the training data are investigated, namely the uniform, Gaussian, and extra distributions. The relationship between the CNN input size and the dimension of the evaluated network is also studied. Extensive experimental results reveal that, compared to training data with a uniform distribution, the Gaussian and extra distributions significantly improve both the prediction performance and the generalizability, for both LFR-CNN and PATCHY-SAN and for various functionality robustness measures. The extension ability of LFR-CNN is significantly better than that of PATCHY-SAN, as verified by extensive comparisons on predicting the robustness of unseen networks. In general, LFR-CNN outperforms PATCHY-SAN and is thus recommended. However, since both LFR-CNN and PATCHY-SAN have advantages in different scenarios, optimal settings for the CNN input size are recommended under different configurations.
AB - Network robustness refers to the ability of a network to continue functioning under malicious attacks, which is critical for various natural and industrial networks. Network robustness can be quantitatively measured by a sequence of values that record the remaining functionality after sequential node- or edge-removal attacks. Robustness evaluations are traditionally obtained by attack simulations, which are computationally time-consuming and sometimes practically infeasible. Convolutional neural network (CNN)-based prediction provides a cost-efficient approach to fast evaluation of network robustness. In this article, the prediction performances of the learning feature representation-based CNN (LFR-CNN) and PATCHY-SAN methods are compared through extensive empirical experiments. Specifically, three distributions of network size in the training data are investigated, namely the uniform, Gaussian, and extra distributions. The relationship between the CNN input size and the dimension of the evaluated network is also studied. Extensive experimental results reveal that, compared to training data with a uniform distribution, the Gaussian and extra distributions significantly improve both the prediction performance and the generalizability, for both LFR-CNN and PATCHY-SAN and for various functionality robustness measures. The extension ability of LFR-CNN is significantly better than that of PATCHY-SAN, as verified by extensive comparisons on predicting the robustness of unseen networks. In general, LFR-CNN outperforms PATCHY-SAN and is thus recommended. However, since both LFR-CNN and PATCHY-SAN have advantages in different scenarios, optimal settings for the CNN input size are recommended under different configurations.
KW - Complex network
KW - convolutional neural network (CNN)
KW - learning feature representation (LFR)
KW - prediction
KW - robustness
UR - http://www.scopus.com/inward/record.url?scp=85161076949&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2023.3269753
DO - 10.1109/TNNLS.2023.3269753
M3 - Article
C2 - 37220060
AN - SCOPUS:85161076949
SN - 2162-237X
VL - 35
SP - 13496
EP - 13507
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 10
ER -