TY - GEN
T1 - Securing Deep Neural Networks on Edge from Membership Inference Attacks Using Trusted Execution Environments
AU - Yang, Cheng Yun
AU - Ramshankar, Gowri
AU - Eliopoulos, Nicholas
AU - Jajal, Purvish
AU - Nambiar, Sudarshan
AU - Miller, Evan
AU - Zhang, Xun
AU - Tian, Dave Jing
AU - Chen, Shuo Han
AU - Perng, Chiy Ferng
AU - Lu, Yung Hsiang
N1 - Publisher Copyright:
© 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
PY - 2024/8/5
Y1 - 2024/8/5
N2 - Privacy concerns arise from malicious attacks on Deep Neural Network (DNN) applications during inference on sensitive data on edge devices. Membership Inference Attacks (MIAs) are mounted by adversaries to determine whether specific sensitive data was used to train a DNN application. Prior work uses Trusted Execution Environments (TEEs) to hide DNN model inference from adversaries on edge devices. Unfortunately, existing methods have two major problems. First, due to the restricted memory of TEEs, prior work cannot secure large DNNs from gradient-based MIAs. Second, prior work is ineffective against output-based MIAs. To mitigate these problems, we present a depth-wise layer partitioning method that runs large sensitive layers inside TEEs. We further propose a model quantization strategy that improves the defense capability of DNNs against output-based MIAs and accelerates computation. We also automate the process of securing PyTorch-based DNN models inside TEEs. Experiments on a Raspberry Pi 3B+ show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20 evaluated on the CIFAR-100 dataset by 28.8%, 11%, and 35.3%, respectively. The accuracy of output-based MIAs on the three models is also reduced by 18.5%, 13.4%, and 29.6%, respectively.
AB - Privacy concerns arise from malicious attacks on Deep Neural Network (DNN) applications during inference on sensitive data on edge devices. Membership Inference Attacks (MIAs) are mounted by adversaries to determine whether specific sensitive data was used to train a DNN application. Prior work uses Trusted Execution Environments (TEEs) to hide DNN model inference from adversaries on edge devices. Unfortunately, existing methods have two major problems. First, due to the restricted memory of TEEs, prior work cannot secure large DNNs from gradient-based MIAs. Second, prior work is ineffective against output-based MIAs. To mitigate these problems, we present a depth-wise layer partitioning method that runs large sensitive layers inside TEEs. We further propose a model quantization strategy that improves the defense capability of DNNs against output-based MIAs and accelerates computation. We also automate the process of securing PyTorch-based DNN models inside TEEs. Experiments on a Raspberry Pi 3B+ show that our method reduces the accuracy of gradient-based MIAs on AlexNet, VGG-16, and ResNet-20 evaluated on the CIFAR-100 dataset by 28.8%, 11%, and 35.3%, respectively. The accuracy of output-based MIAs on the three models is also reduced by 18.5%, 13.4%, and 29.6%, respectively.
KW - ARM TrustZone
KW - membership inference attack
KW - model partitioning
KW - model quantization
KW - trusted execution environment
UR - http://www.scopus.com/inward/record.url?scp=85204999348&partnerID=8YFLogxK
U2 - 10.1145/3665314.3670821
DO - 10.1145/3665314.3670821
M3 - Conference contribution
AN - SCOPUS:85204999348
T3 - Proceedings of the 29th International Symposium on Low Power Electronics and Design, ISLPED 2024
BT - Proceedings of the 29th International Symposium on Low Power Electronics and Design, ISLPED 2024
PB - Association for Computing Machinery, Inc
T2 - 29th ACM/IEEE International Symposium on Low Power Electronics and Design, ISLPED 2024
Y2 - 5 August 2024 through 7 August 2024
ER -