电子学报 (Acta Electronica Sinica), 2024, Vol. 52, Issue 3: 863-871, 9. DOI: 10.12263/DZXB.20221301

针对图像分类的鲁棒物理域对抗伪装

Robust Physical Adversarial Camouflages for Image Classifiers

段晔鑫 1, 贺正芸 2, 张颂 3, 詹达之 4, 王田丰 4, 林庚右 4, 张锦 5, 潘志松 4

Author Information

  • 1. Zhenjiang Campus, Army Military Transportation University, Zhenjiang, Jiangsu 212003 || College of Command and Control Engineering, Army Engineering University, Nanjing, Jiangsu 210007
  • 2. College of Railway Transportation, Hunan University of Technology, Zhuzhou, Hunan 412007
  • 3. Department of Cyberspace Security, Beijing Electronic Science and Technology Institute, Beijing 100071
  • 4. College of Command and Control Engineering, Army Engineering University, Nanjing, Jiangsu 210007
  • 5. Zhenjiang Campus, Army Military Transportation University, Zhenjiang, Jiangsu 212003

Abstract

Deep learning models are vulnerable to adversarial examples. As a more threatening type for practical deep learning systems, physical adversarial examples have received extensive research attention in recent years. Most existing methods use local adversarial patch noise to attack image classification models in the physical world. However, the attack effect of 2D patches in 3D space inevitably declines as the viewing angle changes. To address this issue, the proposed Adv-Camou method uses spatial combination transformation to generate training examples with arbitrary viewpoints and transformed backgrounds in real time. Moreover, the cross-entropy loss between the predicted class and the target class is minimized to make the model output the specified incorrect class. In addition, the established 3D scene allows different attacks to be evaluated fairly and reproducibly. The experimental results show that the coated adversarial camouflage generated by the Adv-Camou method can fool image classifiers from arbitrary viewpoints. In the 3D simulation scene, the average targeted attack success rate of Adv-Camou is more than 25% higher than that of piecing together patches. The success rate of black-box targeted attacks on the Clarifai commercial classification system reaches 42%. In addition, the average attack success rate in real-world 3D-printed model experiments is about 66%, demonstrating that our method significantly outperforms state-of-the-art methods.
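The core optimization the abstract describes — minimizing the cross-entropy between the prediction and a chosen target class, averaged over random viewpoint and background transformations — can be sketched in a toy form. The snippet below is an illustrative sketch only: it uses a small fixed linear classifier and a crude contrast-plus-noise surrogate for the paper's spatial combination transformation and 3D rendering, none of which are reproduced here; all variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions): a fixed linear classifier over a flat
# "texture" vector. The paper optimises a 3D camouflage texture against
# a deep classifier via a rendered 3D scene, which is not reproduced here.
n_classes, dim = 5, 32
W = rng.normal(size=(n_classes, dim))
b = np.zeros(n_classes)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def random_transform(x):
    """Crude surrogate for viewpoint/background change:
    random per-draw contrast scaling plus additive background noise."""
    a = rng.uniform(0.7, 1.3)
    noise = rng.normal(scale=0.1, size=x.shape)
    return a, a * x + noise

target = 3                                # specified incorrect class
x = rng.normal(scale=0.1, size=dim)       # camouflage parameters

for _ in range(300):
    grad = np.zeros(dim)
    for _ in range(8):                    # expectation over transforms
        a, xt = random_transform(x)
        p = softmax(W @ xt + b)
        onehot = np.eye(n_classes)[target]
        # d(CE)/dx, chained through the affine transform (factor a)
        grad += a * (W.T @ (p - onehot))
    x -= 0.05 * grad / 8                  # gradient step toward target

# Targeted success rate over fresh random transforms
success = np.mean([int(np.argmax(W @ random_transform(x)[1] + b)) == target
                   for _ in range(100)])
print(f"targeted success rate: {success:.2f}")
```

Averaging the gradient over several random transforms per step is what gives the camouflage its robustness to viewpoint change; a texture optimized for a single fixed view would fail as soon as the transform distribution shifts.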

Key words

adversarial example / adversarial camouflage / adversarial attack / image classification / deep neural network

Classification

Information Technology and Security Science

Cite This Article

段晔鑫, 贺正芸, 张颂, 詹达之, 王田丰, 林庚右, 张锦, 潘志松. 针对图像分类的鲁棒物理域对抗伪装[J]. 电子学报, 2024, 52(3): 863-871, 9.

Funding

National Natural Science Foundation of China (No. 62076251)

电子学报 (Acta Electronica Sinica)

OA | 北大核心 (PKU Core) | CSTPCD

ISSN 0372-2112
