
图神经网络对抗攻击与鲁棒性评测前沿进展

Advances of Adversarial Attacks and Robustness Evaluation for Graph Neural Networks

Abstract

In recent years, graph neural networks (GNNs) have gradually become an important research direction in artificial intelligence. However, the adversarial vulnerability of GNNs poses severe challenges to their practical applications. To provide a comprehensive picture of research on adversarial attacks and robustness evaluation for GNNs, this paper reviews and discusses the state-of-the-art advances in the field. It first introduces the research background of adversarial attacks on GNNs, gives a formal definition of such attacks, and explains the basic concepts and the research framework for adversarial attacks and robustness evaluation of GNNs. It then surveys the specific methods proposed in this area, categorizing the leading approaches by the type of adversarial attack and the range of attack targets, and analyzing their working mechanisms, principles, strengths, and weaknesses. Because attack-based robustness evaluation depends on the choice of attack method and the degree of adversarial perturbation, it provides only an indirect, local assessment and cannot fully capture the essential robustness of a model; the paper therefore focuses on direct robustness evaluation metrics. On this basis, to support the design and evaluation of adversarial attack methods and robust GNN models, representative attack methods are compared experimentally in terms of ease of implementation, accuracy, and execution time. Finally, open challenges and future research directions are discussed. Overall, current research on the adversarial robustness of GNNs is largely driven by repeated experiments and lacks a guiding theoretical framework; ensuring the trustworthiness of GNN-based deep intelligent systems still requires further systematic research on fundamental theory.
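For context on the formal definition mentioned in the abstract, the following is a minimal LaTeX sketch of the budget-constrained formulation commonly used in the graph adversarial attack literature; the symbols (trained model f_{theta*}, clean graph (A, X), perturbed graph (A-hat, X-hat), attack loss L_atk, budget Delta) are illustrative assumptions and not necessarily the notation or exact definition adopted in the paper.

% Illustrative sketch: budget-constrained adversarial attack on a GNN.
% The attacker perturbs the graph structure and/or node features within
% budget \Delta so as to maximize an attack loss on the trained model.
\begin{equation*}
\max_{\hat{A},\,\hat{X}} \; \mathcal{L}_{\mathrm{atk}}\bigl(f_{\theta^{*}}(\hat{A},\hat{X})\bigr)
\quad \text{s.t.} \quad
\lVert \hat{A} - A \rVert_{0} + \lVert \hat{X} - X \rVert_{0} \le \Delta,
\qquad
\theta^{*} = \arg\min_{\theta} \mathcal{L}_{\mathrm{train}}\bigl(f_{\theta}(A',X')\bigr),
\end{equation*}
% where (A', X') = (\hat{A}, \hat{X}) for poisoning attacks (the model is
% retrained on the perturbed graph) and (A', X') = (A, X) for evasion attacks
% (the model is trained on the clean graph and attacked at test time).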

吴涛;曹新汶;先兴平;袁霖;张殊;崔灿一星;田侃

School of Cyber Security and Information Law, Chongqing University of Posts and Telecommunications, Chongqing 400065||Chongqing Engineering Laboratory of Network and Information Security Technology, Chongqing 400065||Chongqing University of Posts and Telecommunications - Chongqing China Three Gorges Museum Joint Laboratory of Smart Culture and Museums, Chongqing 400065

Computer and Automation

graph neural network; adversarial vulnerability; adversarial attacks; robustness evaluation

《计算机科学与探索》(Journal of Frontiers of Computer Science and Technology), 2024(8)

pp. 1935-1959 (25 pages)

This work was supported by the National Natural Science Foundation of China (62376047, 62106030), the Key Projects of the Chongqing Natural Science Foundation Innovation and Development Joint Fund (CSTB2023NSCQ-LZX0003, CSTB2023NSCQ-LMX0023), the Key Project of the Science and Technology Research Program of Chongqing Education Commission (KJZD-K202300603), and the Chongqing Technological Innovation and Application Development Project (CSTB2022TIAD-GPX0014).

10.3778/j.issn.1673-9418.2311117
