Improving the Transferability of Adversarial Examples with Diverse Gradients

Yangjie Cao, Haobo Wang, Chenxi Zhu, Yan Zhuang (Corresponding author), Jie Li, Xianfu Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceedings › Scientific › peer-review

Abstract

Previous works have demonstrated the superior transferability of ensemble-based black-box attacks. However, existing methods require significant architectural differences among the source models to ensure gradient diversity. In this paper, we propose the Diverse Gradient Method (DGM), verifying that knowledge distillation is able to generate diverse gradients from a single, unchanged model architecture to boost transferability. The core idea behind DGM is to obtain transferable adversarial perturbations by fusing, through an ensemble strategy, the diverse gradients provided by a single source model and its distilled versions. Experimental results show that DGM successfully crafts adversarial examples with higher transferability while requiring only an extremely low training cost. Furthermore, the proposed method can be used as a flexible module to improve the transferability of most existing black-box attacks.
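The abstract describes fusing the gradients of a single source model and its distilled versions through an ensemble strategy. The sketch below is a minimal, illustrative interpretation of that idea, not the paper's exact algorithm: assuming PyTorch classifiers, it averages the cross-entropy losses of the source model and hypothetical distilled copies and takes iterative FGSM-style steps on the fused gradient. The function name, hyperparameters, and loss-averaging ensemble are all assumptions.

```python
# Minimal sketch (not the authors' exact implementation) of crafting an
# adversarial example from the fused gradients of a source model and its
# distilled copies. All names and hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def diverse_gradient_attack(source_model, distilled_models, x, y,
                            eps=8 / 255, alpha=2 / 255, steps=10):
    """Iteratively perturb x using the averaged (fused) gradient of the
    source model and its distilled versions, under an L_inf budget eps."""
    models = [source_model] + list(distilled_models)
    for m in models:
        m.eval()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Fuse gradients by averaging the cross-entropy loss over all models.
        loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Untargeted FGSM-style step, then project back into the eps-ball.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv
```

In this reading, the distilled copies supply the gradient diversity that ensemble attacks would otherwise obtain from architecturally different source models.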
Original language: English
Title of host publication: IJCNN 2023 - International Joint Conference on Neural Networks
Subtitle of host publication: Proceedings
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Number of pages: 9
ISBN (Electronic): 978-1-6654-8867-9
ISBN (Print): 978-1-6654-8868-6
DOIs
Publication status: Published - 2 Aug 2023
MoE publication type: A4 Article in a conference publication
Event: International Joint Conference on Neural Networks, IJCNN 2023 - Gold Coast, Australia
Duration: 18 Jun 2023 - 23 Jun 2023

Publication series

Series: Proceedings of the International Joint Conference on Neural Networks
Volume: 2023-June

Conference

Conference: International Joint Conference on Neural Networks, IJCNN 2023
Country/Territory: Australia
City: Gold Coast
Period: 18/06/23 - 23/06/23

Keywords

  • Adversarial examples
  • Black-box attack
  • Gradient diversity
  • Transferability
