Modern Applied Physics, 2025, Vol. 16, Issue (1): 59-72, 14. DOI: 10.12061/j.issn.2095-6223.202412054
A Pedagogical Explanation of the Inherent Uncertainty Principle of Neural Networks from the Perspective of Gradient-Based Attacks
Abstract
In recent years, deep neural networks (DNNs) have achieved remarkable success across various scientific domains, yet their vulnerability has become increasingly prominent. This paper systematically investigates the inherent uncertainty principle of neural networks from the perspective of gradient-based attacks, revealing the inherent contradiction between accuracy and robustness. First, the vulnerability of neural networks in scientific applications is analyzed, particularly in image classification, weather prediction, chemical computation, fluid dynamics, and quantum chromodynamics. By employing adversarial attack methods such as the fast gradient sign method (FGSM), the significant susceptibility of neural networks to minor perturbations is demonstrated, and the theoretical foundation of the robustness-accuracy paradox is further explored. Second, through analogy with the uncertainty principle in quantum mechanics, a theoretical framework is established to describe the robustness-accuracy paradox in neural networks, supported by numerical experiments validating its universality. Finally, the practical implications of the neural network uncertainty principle are emphasized, and the critical importance of understanding and addressing AI vulnerabilities for future research relying on artificial intelligence is underscored. This study provides a comprehensive perspective for understanding neural network vulnerabilities and offers guidance for designing robust neural architectures and ensuring AI safety.
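For readers unfamiliar with FGSM, the following is a minimal sketch of the single-step attack referenced above, assuming a PyTorch classifier with inputs scaled to [0, 1]; the function name, loss choice, and clamping range are illustrative assumptions rather than the paper's exact experimental setup.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Fast gradient sign method: take one step of size epsilon along the sign
    # of the loss gradient with respect to the input.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()      # minor perturbation of magnitude epsilon
    return x_adv.clamp(0.0, 1.0).detach()    # keep the perturbed input in the valid range

Even with small epsilon, such perturbations can flip the prediction of an otherwise accurate model, which is the vulnerability the abstract refers to.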
Key words
neural networks/inherent uncertainty principle/gradient attacks/robustness-accuracy trade-off/scientific research
Classification
Computer Science and Automation
Cite this article
张俊杰, 陈剑楠, 孟德宇. A Pedagogical Explanation of the Inherent Uncertainty Principle of Neural Networks from the Perspective of Gradient-Based Attacks[J]. Modern Applied Physics, 2025, 16(1): 59-72, 14.
Funding
Supported by the National Natural Science Foundation of China (12405318)