RETHINKING BACKDOOR ATTACKS ON DATASET DISTILLATION: A KERNEL METHOD PERSPECTIVE

Ming-Yu Chung, Sheng-Yen Chou, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo, Tsung-Yi Ho

Research output: Contribution to conference › Paper › peer-review

Abstract

Dataset distillation offers a potential means to enhance data efficiency in deep learning. Recent studies have shown its ability to counteract backdoor risks present in original training samples. In this study, we delve into the theoretical aspects of backdoor attacks and dataset distillation based on kernel methods. We introduce two new theory-driven trigger pattern generation methods specialized for dataset distillation. Through a comprehensive set of analyses and experiments, we show that our optimization-based trigger design framework yields effective backdoor attacks on dataset distillation. Notably, datasets poisoned by our designed triggers resist conventional backdoor detection and mitigation methods, and our empirical results confirm that the resulting attacks remain resilient.
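To make the optimization-based trigger design concrete, below is a minimal, hypothetical sketch (not the authors' released code) that optimizes an additive trigger against a kernel-regression surrogate. It assumes an RBF kernel as a stand-in for the neural tangent kernel used in kernel-based distillation, flattened toy images, and illustrative hyperparameters; every name, shape, and constant in it is an assumption for illustration.

```python
# Hypothetical sketch of optimization-based backdoor trigger design against a
# kernel-regression surrogate (RBF kernel as a stand-in for the NTK).
# All tensors, shapes, and hyperparameters are illustrative assumptions.
import torch

def rbf_kernel(X, Y, gamma=0.05):
    # K[i, j] = exp(-gamma * ||x_i - y_j||^2)
    return torch.exp(-gamma * torch.cdist(X, Y).pow(2))

def kernel_predict(X_train, y_train, X_test, reg=1e-3):
    # Kernel ridge regression: f(x) = k(x, X) (K + reg*I)^{-1} y
    K = rbf_kernel(X_train, X_train)
    alpha = torch.linalg.solve(K + reg * torch.eye(K.shape[0]), y_train)
    return rbf_kernel(X_test, X_train) @ alpha

torch.manual_seed(0)
X = torch.rand(64, 64)                          # 64 toy "images", flattened 8x8
y = torch.eye(10)[torch.randint(0, 10, (64,))]  # one-hot labels, 10 classes
target = torch.eye(10)[0]                       # attacker's target class
poison_idx = torch.arange(8)                    # samples carrying the trigger

delta = torch.zeros(64, requires_grad=True)     # learnable additive trigger
opt = torch.optim.Adam([delta], lr=0.05)

for step in range(200):
    Xp, yp = X.clone(), y.clone()
    Xp[poison_idx] = (X[poison_idx] + delta).clamp(0, 1)  # plant the trigger
    yp[poison_idx] = target                               # relabel to target
    # Surrogate for the distilled model: kernel regression on poisoned data.
    # Attack objective: any triggered input should map to the target class.
    triggered = (X[32:40] + delta).clamp(0, 1)
    pred = kernel_predict(Xp, yp, triggered)
    loss = ((pred - target).pow(2).sum(1).mean()
            + 0.1 * delta.pow(2).mean())                  # keep trigger small
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design choice this sketch illustrates is that the trigger, rather than the model, is the optimization variable: gradients of the surrogate's predictions flow back through the kernel to the trigger pattern itself.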

Original language: English
State: Published - 2024
Event: 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria
Duration: 7 May 2024 – 11 May 2024

Conference

Conference: 12th International Conference on Learning Representations, ICLR 2024
Country/Territory: Austria
City: Hybrid, Vienna
Period: 7/05/24 – 11/05/24
