RETHINKING BACKDOOR ATTACKS ON DATASET DISTILLATION: A KERNEL METHOD PERSPECTIVE

Ming Yu Chung, Sheng Yen Chou, Chia Mu Yu, Pin Yu Chen, Sy Yen Kuo, Tsung Yi Ho

Research output: peer-reviewed

Abstract

Dataset distillation offers a potential means to enhance data efficiency in deep learning. Recent studies have shown its ability to counteract backdoor risks present in original training samples. In this study, we delve into the theoretical aspects of backdoor attacks and dataset distillation based on kernel methods. We introduce two new theory-driven trigger pattern generation methods specialized for dataset distillation. Following a comprehensive set of analyses and experiments, we show that our optimization-based trigger design framework informs effective backdoor attacks on dataset distillation. Notably, datasets poisoned by our designed trigger prove resilient against conventional backdoor attack detection and mitigation methods. Our empirical results validate that the triggers developed using our approaches are proficient at executing resilient backdoor attacks.
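As a rough, hypothetical illustration of the optimization-based trigger design framework the abstract describes, the sketch below optimizes a trigger pattern against a kernel ridge regression surrogate. This is not the authors' implementation: an RBF kernel stands in for the neural tangent kernel typically used in kernel-based distillation (e.g., KIP), and the data shapes, hyperparameters, and names (`rbf_kernel`, `delta`) are illustrative assumptions.

```python
# Minimal sketch (assumptions noted above): optimize a backdoor trigger delta
# so that triggered queries are classified as the target class by a kernel
# ridge regression (KRR) model fit on a small "distilled" support set.
import torch

torch.manual_seed(0)

def rbf_kernel(a, b, gamma=0.1):
    # K[i, j] = exp(-gamma * ||a_i - b_j||^2); an RBF stand-in for the NTK.
    return torch.exp(-gamma * torch.cdist(a, b).pow(2))

# Toy distilled support set (flattened images) with one-hot soft labels.
n_support, dim, n_classes = 20, 64, 10
X_s = torch.randn(n_support, dim)
Y_s = torch.eye(n_classes)[torch.randint(0, n_classes, (n_support,))]

# Clean query points we want flipped to the target class once triggered.
X_q = torch.randn(32, dim)
target = 3
Y_target = torch.eye(n_classes)[torch.full((32,), target)]

# KRR dual coefficients are closed-form and fixed with respect to delta.
K_ss = rbf_kernel(X_s, X_s) + 1e-3 * torch.eye(n_support)
alpha = torch.linalg.solve(K_ss, Y_s)

delta = torch.zeros(dim, requires_grad=True)  # the trigger pattern to learn
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    # Surrogate model's prediction on triggered queries.
    preds = rbf_kernel(X_q + delta, X_s) @ alpha
    # Push predictions toward the target class; keep the trigger small.
    loss = (preds - Y_target).pow(2).mean() + 0.01 * delta.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final attack loss:", loss.item())
```

The point of this toy setup mirrors the abstract's framing: because kernel-based distillation admits a closed-form predictor, the trigger can be differentiated through that predictor directly, rather than through an inner neural network training loop.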

Original language: English
Publication status: Published - 2024
Event: 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria
Duration: 7 May 2024 → 11 May 2024

Conference

Conference: 12th International Conference on Learning Representations, ICLR 2024
Country/Territory: Austria
City: Hybrid, Vienna
Period: 7/05/24 → 11/05/24
