信息安全研究 2025, Vol. 11, Issue (3): 205-213, 9. DOI: 10.12379/j.issn.2096-1057.2025.03.02
一种抗标签翻转攻击的联邦学习方法
A Federated Learning Method Resistant to Label Flip Attack
Abstract
Because the users participating in federated learning training have a high degree of autonomy and their identities are difficult to verify, the training process is vulnerable to label flip attacks, in which the model learns wrong rules from mislabeled data and its overall performance degrades. To resist label flip attacks effectively, a dilution-protection federated learning method with multi-stage model training is proposed. The method randomly partitions the training dataset and uses a dilution-protection federated learning algorithm to distribute only part of the data to each participating client, limiting the amount of data any client holds so that a malicious participant with a large dataset cannot do major damage to the model. After each training stage, the gradients of all training epochs in that stage are reduced in dimensionality and clustered in order to identify potentially malicious participants and restrict their training in the next stage. At the same time, the global model parameters are saved after each stage so that every stage builds on the model obtained in the previous stage. Experimental results show that the method reduces the impact of the attack without harming model accuracy and helps improve the convergence speed of the model.
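The abstract describes two mechanisms: a dilution partition that caps how much data any one client receives, and a per-stage clustering of client gradients after dimensionality reduction to flag likely attackers. The following is a minimal sketch of that idea in Python, assuming PCA as the dimensionality-reduction step and k-means as the clustering step; the abstract does not name specific algorithms, so all function names and parameters here (dilute_partition, flag_suspicious_clients, cap_fraction, etc.) are illustrative rather than the paper's actual implementation.

```python
# Sketch of the stage-wise defence outlined in the abstract (illustrative only).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans


def dilute_partition(num_samples, num_clients, cap_fraction=0.05, seed=0):
    """Randomly split the training set and cap each client's shard size so that
    no single participant holds enough data to dominate the global model."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(num_samples)
    cap = int(cap_fraction * num_samples)
    shards = np.array_split(indices, num_clients)
    return [shard[:cap] for shard in shards]


def flag_suspicious_clients(client_gradients, n_components=2, seed=0):
    """Cluster per-client gradient updates collected over one training stage
    (after PCA) and flag the minority cluster as potentially malicious."""
    grads = np.stack(client_gradients)                  # shape: (clients, dim)
    reduced = PCA(n_components=n_components).fit_transform(grads)
    labels = KMeans(n_clusters=2, n_init=10, random_state=seed).fit_predict(reduced)
    minority = np.argmin(np.bincount(labels, minlength=2))
    return np.where(labels == minority)[0]              # clients to restrict next stage


# Toy usage: 10 clients, 2 of which submit roughly inverted (label-flipped) gradients.
shards = dilute_partition(num_samples=60000, num_clients=10)
rng = np.random.default_rng(1)
honest = [rng.normal(1.0, 0.1, size=100) for _ in range(8)]
flipped = [rng.normal(-1.0, 0.1, size=100) for _ in range(2)]
print("clients restricted in the next stage:", flag_suspicious_clients(honest + flipped))
```

In this toy run the two clients submitting inverted gradients fall into the minority cluster and would be excluded from the next training stage, mirroring the stage-wise restriction described in the abstract.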
Key words
federated learning / data security / malicious behavior / label flip attack / defense
Classification
Computer and Automation
Citation
周景贤, 韩威, 张德栋, 李志平. 一种抗标签翻转攻击的联邦学习方法[J]. 信息安全研究, 2025, 11(3): 205-213, 9.
Funding
National Natural Science Foundation of China (U2333201)
Civil Aviation Safety Capacity Building Project (PESA2022093, PESA2023101)
Fundamental Research Funds for the Central Universities (3122022058)
China University Industry-University-Research Innovation Fund (2023IT277)