电讯技术 (Telecommunication Engineering), 2026, Vol. 66, Issue 2: 247-258, 12. DOI: 10.20079/j.issn.1001-893x.241106003
Communication-efficient Layer-wise Pruning Algorithm for Hierarchical Federated Learning
Abstract
Cloud-edge-client hierarchical federated learning expands the scope of cloud data access and enhances model training effectiveness, but the large network size and number of devices increase the communication burden. To solve this problem, a layer-wise pruning algorithm with a fixed layer-preserving rate (LP-FLR-HFL) is proposed, which performs model pruning before uploading model parameters, effectively compressing the model size and lowering system overhead. Building on this, and taking into account the differences in model training between clients, a layer-wise pruning algorithm with an adaptive layer-preserving rate (LP-ALR-HFL) is proposed. This algorithm adjusts the layer-preserving rate in real time based on model accuracy, effectively mitigating the impact of non-independent and identically distributed (non-IID) data on pruning performance and improving model adaptability. Simulation results show that, compared with the baseline method, the LP-FLR-HFL algorithm reduces system latency by up to 56.06% and energy consumption by up to 48.88% while keeping model accuracy controllable, and the LP-ALR-HFL algorithm improves model accuracy by up to 4.71% while retaining the latency and energy optimization advantages of LP-FLR-HFL.

Key words
hierarchical federated learning / model pruning / layer-wise pruning / communication efficiency

Category
Information Technology and Security Science

Citation
刘昊天, 魏泽, 何荣希. Communication-efficient Layer-wise Pruning Algorithm for Hierarchical Federated Learning[J]. 电讯技术, 2026, 66(2): 247-258, 12.

Funding
National Natural Science Foundation of China (62371085)
Fundamental Research Funds for the Central Universities (3132023514)
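The abstract describes LP-ALR-HFL as adjusting the layer-preserving rate in real time based on model accuracy before each upload. The paper's exact update rule and layer-importance measure are not given in this excerpt, so the following is only a minimal illustrative sketch under assumed conventions: the preserving rate is nudged up when accuracy degrades and down when it improves, and the kept layers are the top-ranked fraction under a hypothetical per-layer importance score.

```python
import math

def update_preserving_rate(rate, prev_acc, curr_acc,
                           step=0.05, tol=0.005,
                           min_rate=0.3, max_rate=1.0):
    """Hypothetical adaptive rule (not the authors' exact one):
    keep more layers when accuracy drops after pruning,
    prune more aggressively when accuracy is stable or improving."""
    if curr_acc < prev_acc - tol:        # pruning hurt accuracy
        rate = min(max_rate, rate + step)
    elif curr_acc > prev_acc + tol:      # accuracy still improving
        rate = max(min_rate, rate - step)
    return rate

def select_layers(importance, rate):
    """Keep the ceil(rate * L) layers with the largest importance
    score; only these layers would be uploaded to the edge server.
    `importance` maps layer name -> assumed importance score."""
    n_keep = max(1, math.ceil(rate * len(importance)))
    ranked = sorted(importance, key=importance.get, reverse=True)
    return set(ranked[:n_keep])
```

For example, a client whose accuracy fell from 0.90 to 0.88 after the last pruned upload would raise its rate from 0.80 to 0.85 and re-select layers accordingly; the actual LP-ALR-HFL schedule, thresholds, and importance metric should be taken from the paper itself.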