Pseudo datasets explain artificial neural networks

Yi Chi Chu, Yi Hau Chen, Chao Yu Guo*

*Corresponding author for this work

Research output: Article, peer-reviewed

Abstract

Machine learning enhances predictive ability in many research fields compared with conventional statistical approaches. However, the advantage of a regression model is that it can effortlessly interpret the effect of each predictor. Therefore, interpretable machine-learning models are desirable as deep-learning techniques advance. Although many studies have proposed ways to explain neural networks, this research suggests an intuitive and feasible algorithm to interpret any set of input features of artificial neural networks in terms of population-mean-level changes. The new algorithm introduces the concept of generating pseudo datasets and evaluating the impact of changes in the input features. Our approach can accurately obtain effect estimates for single or multiple input neurons and depict the association between the predictor and outcome variables. Computer simulation studies show that the explanatory effects of the predictors derived from the neural network, as a particular case, can approximate the general linear model estimates. In addition, we applied the new method to three real-life analyses. The results demonstrate that the new algorithm obtains effect estimates from neural networks that are similar to those of regression models, while yielding smaller predictive errors than the conventional regression models. It is also worth noting that the new pipeline is much less computationally intensive than SHapley Additive exPlanations (SHAP), which cannot simultaneously measure the impact of two or more inputs while adjusting for other features.
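The abstract does not give implementation details, but the core idea it describes, perturbing an input feature across a pseudo copy of the data and comparing mean network predictions, can be sketched as follows. This is a minimal illustrative sketch: the function name, the one-unit shift, and the use of scikit-learn's MLPRegressor are assumptions for demonstration, not the authors' exact procedure.

    # Sketch of the pseudo-dataset idea: copy the data, shift one input feature,
    # and read the effect off the change in the mean network prediction.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def pseudo_dataset_effect(model, X, feature_idx, delta=1.0):
        """Estimate the population-mean effect of shifting one input feature.

        Two pseudo datasets are formed by copying X and perturbing the chosen
        feature; the effect is the difference in mean model predictions per unit.
        """
        X_base = X.copy()
        X_shift = X.copy()
        X_shift[:, feature_idx] += delta          # pseudo dataset with the feature shifted
        effect = model.predict(X_shift).mean() - model.predict(X_base).mean()
        return effect / delta                     # per-unit change, comparable to a regression slope

    # Toy usage: a linear data-generating process, so the estimate for feature 0
    # should approximate the true coefficient (2.0) in this simulation.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 3))
    y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=2000)
    net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0).fit(X, y)
    print(pseudo_dataset_effect(net, X, feature_idx=0))

Shifting two or more features in the same pseudo copy would, in the same spirit, give a joint effect while the remaining inputs stay at their observed values.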
