

Adversarial Examples Defense Method Based on Neighbor2Neighbor Denoising

Abstract

In image classification tasks, adversarial examples can cause deep learning models to output wrong results with high confidence. The current mainstream defense, improving the classification model itself, is expensive or struggles to defend against new attack algorithms. To solve these problems, a new defense method against adversarial examples is proposed based on image denoising. Gaussian noise is added to the input examples to destroy the adversarial perturbations elaborately crafted by the attacker, and the Neighbor2Neighbor denoising network is then used to reduce the noise in the examples. Experimental results show that, on the ImageNet dataset, the proposed method can effectively defend against classical attacks such as the Basic Iterative Method (BIM), the C&W (Carlini and Wagner) attack, and DeepFool, and that its defense effect is better than those of ComDefend and JPEG compression.
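The defense described above is a simple preprocessing pipeline: perturb, denoise, classify. The following is a minimal PyTorch sketch of that pipeline; the names denoiser (a pretrained Neighbor2Neighbor model), classifier, and the noise level sigma are illustrative assumptions, not the paper's released code.

import torch

def defend_and_classify(x, denoiser, classifier, sigma=0.1):
    # x: image batch with values in [0, 1], shape (N, C, H, W).
    # Step 1: add Gaussian noise to drown out the attacker's crafted perturbation.
    noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
    # Step 2: reduce the noise with the (assumed pretrained) Neighbor2Neighbor denoiser.
    with torch.no_grad():
        purified = denoiser(noisy).clamp(0.0, 1.0)
    # Step 3: classify the purified image instead of the raw input.
    return classifier(purified)

The noise level sigma governs the usual trade-off in such purification defenses: a larger sigma destroys stronger adversarial perturbations but also removes more of the image detail the classifier relies on.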

王飞宇;张帆;郭威

Information Engineering University, Zhengzhou 450001, Henan, China

Computer and Automation

Keywords: deep learning; adversarial examples; adversarial examples defense; image denoising

Journal of Information Engineering University, 2024(4)

Pages 466-471 (6 pages)

DOI: 10.3969/j.issn.1671-0673.2024.04.015
