Adversarial Examples Defense Method Based on Neighbor2Neighbor Denoising
In image classification tasks, adversarial examples can cause deep learning models to output incorrect results with high confidence. The current main defense, improving the classification model itself, is costly or struggles to defend against new attack algorithms. To solve this problem, a new adversarial example defense method based on image denoising is proposed. Gaussian noise is added to the input sample to disrupt the adversarial perturbation carefully crafted by the attacker, and the Neighbor2Neighbor denoising network is then used to reduce the noise in the sample. Experimental results show that, on the ImageNet dataset, the proposed method can defend against the Basic Iterative Method (…
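The two-stage preprocessing defense summarized in the abstract (inject Gaussian noise to break the crafted perturbation, then denoise before classification) can be sketched as follows. This is only an illustration, not the paper's implementation: the `defend` signature, the noise level `sigma`, and the 3×3 box filter standing in for the trained Neighbor2Neighbor network are all assumptions.

```python
import numpy as np

def defend(image, denoiser, sigma=0.05, rng=None):
    """Preprocessing defense sketch: drown the adversarial perturbation
    in Gaussian noise, then denoise before the image reaches the classifier.
    `denoiser` stands in for the Neighbor2Neighbor network (not shown here)."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = image + rng.normal(0.0, sigma, size=image.shape)  # disrupt crafted perturbation
    return np.clip(denoiser(noisy), 0.0, 1.0)                 # suppress the added noise

def box_denoise(x):
    """Stand-in denoiser: a 3x3 box filter. The paper uses a trained
    Neighbor2Neighbor network; this placeholder is for illustration only."""
    pad = np.pad(x, 1, mode="edge")
    windows = (pad[i:i + x.shape[0], j:j + x.shape[1]]
               for i in range(3) for j in range(3))
    return sum(windows) / 9.0

# Toy usage on a flat 8x8 grayscale image in [0, 1].
img = np.full((8, 8), 0.5)
defended = defend(img, box_denoise, sigma=0.05)
print(defended.shape)  # (8, 8)
```

The defended image would then be fed to the unmodified classifier, which is the point of the method: the classification model itself does not need to be retrained against each new attack.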
Wang Feiyu; Zhang Fan; Guo Wei
Information Engineering University, Zhengzhou 450001, Henan, China
Computer and Automation
Keywords: deep learning; adversarial examples; adversarial examples defense; image denoising
Journal of Information Engineering University, 2024 (4)
pp. 466-471 (6 pages)