TY - GEN
T1 - Mitigate the Negative Transfer Learning using Adaptive Thresholding for Fault Diagnosis
AU - Mp, Pavan Kumar
AU - Chen, Kun Chih Jimmy
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
AB - The fourth industrial revolution has created a data-centric ecosystem in which the implementation of Prognostics and Health Management (PHM) technology is crucial to supporting contemporary industrial systems. To enhance performance in fault diagnosis and health assessment of mechanical equipment, Deep Learning (DL) has been integrated into PHM. However, DL models encounter several challenges in PHM, such as the requirement for large amounts of labeled data and a lack of generalizability. Transfer Learning (TL) has emerged as a promising technique to overcome these limitations. Fine-tuning, a commonly used approach to the inductive transfer of deep models, assumes that the source and target tasks are related and that pre-trained parameters from the source task are likely to be close to the optimal parameters for the target task. Nevertheless, when the amount of training data in the target domain is limited, fine-tuning can lead to negative transfer and catastrophic forgetting. To overcome these issues, we propose a novel regularization approach that selectively modulates the features of normalized inputs based on their distance from the mini-batch mean during fine-tuning. Our approach aims to prevent the negative transfer of pre-trained knowledge that is irrelevant to the target task and to mitigate catastrophic forgetting. Furthermore, our approach yields a 0.9-5% increase in accuracy under the same environmental conditions and a 2.8-6.2% increase under different environmental conditions, compared with other state-of-the-art regularization-based methods.
UR - http://www.scopus.com/inward/record.url?scp=85167866619&partnerID=8YFLogxK
U2 - 10.1109/COINS57856.2023.10189313
DO - 10.1109/COINS57856.2023.10189313
M3 - Conference contribution
AN - SCOPUS:85167866619
T3 - 2023 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2023
BT - 2023 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2023 IEEE International Conference on Omni-Layer Intelligent Systems, COINS 2023
Y2 - 23 July 2023 through 25 July 2023
ER -