Acta Electronica Sinica (电子学报), 2024, Vol. 52, Issue 3: 863-871, 9. DOI: 10.12263/DZXB.20221301
针对图像分类的鲁棒物理域对抗伪装
Robust Physical Adversarial Camouflages for Image Classifiers
Abstract
Deep learning models are vulnerable to adversarial examples. As a more serious threat to practical deep learning systems, physical adversarial examples have received extensive research attention in recent years. Most existing methods use local adversarial patch noise to attack image classification models in the physical world. However, the attack effect of 2D patches in 3D space inevitably declines as the viewing angle changes. To address this issue, the proposed Adv-Camou method uses spatial combination transformations to generate training examples with arbitrary viewpoints and transformed backgrounds in real time. Moreover, the cross-entropy loss between the predicted class and the target class is minimized so that the model outputs the specified incorrect class. In addition, the established 3D scene allows different attacks to be evaluated fairly and reproducibly. The experimental results show that the coated adversarial camouflage generated by Adv-Camou can fool image classifiers from arbitrary viewpoints. In the 3D simulation scene, the average targeted attack success rate of Adv-Camou is more than 25% higher than that of piecing together patches. The success rate of black-box targeted attacks on the Clarifai commercial classification system reaches 42%. In addition, the average attack success rate in real-world 3D-printed model experiments is about 66%, demonstrating that our method significantly outperforms state-of-the-art methods.
Keywords
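The core optimization the abstract describes (minimizing the targeted cross-entropy averaged over transformed views of the object, in the spirit of expectation-over-transformation) can be sketched as follows. This is a toy illustration, not the paper's pipeline: the linear "classifier" `W`, the brightness-style scaling transforms, and all names here are assumptions chosen to keep the sketch self-contained; the real method optimizes a 3D camouflage texture against deep CNNs through viewpoint and background transformations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a fixed linear layer + softmax.
W = rng.normal(size=(3, 8))            # 3 classes, 8 "pixel" features

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

target = 2                             # attacker-chosen incorrect class
x = rng.normal(size=8)                 # "camouflage texture" being optimized
scales = (0.5, 1.0, 2.0)               # stand-in physical transforms (brightness)

lr = 0.05
for _ in range(5000):
    grad = np.zeros_like(x)
    for s in scales:                   # expectation over transformations
        p = softmax(W @ (s * x))
        g = p.copy()
        g[target] -= 1.0               # d(cross-entropy)/d(logits) = p - one_hot
        grad += s * (W.T @ g)          # chain rule back through the transform
    x -= lr * grad / len(scales)

# Evaluate: the optimized texture should be classified as `target`
# under freshly sampled transforms.
hits = sum(int(np.argmax(W @ (rng.uniform(0.4, 2.5) * x)) == target)
           for _ in range(100))
print(hits)
```

Scaling transforms were chosen here because the argmax of a linear model is invariant to positive scaling, so the sketch provably succeeds once training converges; the paper's contribution is precisely that its spatial combination transformations cover much harder viewpoint and background changes.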
adversarial example / adversarial camouflage / adversarial attack / image classification / deep neural network
Classification
Information Technology and Security Science
段晔鑫, 贺正芸, 张颂, 詹达之, 王田丰, 林庚右, 张锦, 潘志松. Robust physical adversarial camouflages for image classifiers[J]. Acta Electronica Sinica, 2024, 52(3): 863-871, 9.
Funding
National Natural Science Foundation of China (No. 62076251)