Chinese Journal of Network and Information Security, 2025, 11(2): 136-151. DOI: 10.11959/j.issn.2096-109x.2025025
Harnessing self-supervised learning to boost malicious traffic detection with enhanced attention
Abstract
Existing deep learning-based malicious traffic detection methods generally suffer from three problems: scarcity of labeled samples, inadequate representation of malicious behavior traffic features, and a high false positive rate caused by ineffective integration of behavioral association patterns during detection. To address these issues, an end-to-end malicious traffic detection method named MTAttention was proposed. The heterogeneous header features and payloads of network traffic were uniformly encoded, yielding a standardized packet-level flow and session representation through structured multi-packet sequential traffic representation. Based on the MAE model, a self-supervised mask pre-training strategy was adopted, and a vision Transformer was used to extract rich traffic feature representations. By selectively attending to the spatial and inter-variable dependencies among different parts of the input packet sequence, a general traffic representation was learned, and the weight parameters of the encoder were used to initialize subsequent tasks, accelerating model training. A packet-sequence feature fusion strategy based on channel attention was then introduced, and a multi-dimensional attention mechanism together with labeled data was used to fine-tune the model weights for traffic identification and classification tasks. Before classification decisions were made, the model's ability to integrate high-weight features was enhanced, further improving detection precision. Experiments on the CIC-IDS2017 dataset showed that, in the multi-class malicious traffic identification scenario, MTAttention achieved an average precision of 98.7% and an inference throughput exceeding 1,590 samples per second. Compared with Flow-MAE, an improvement over the MAE paradigm, MTAttention maintained high precision while requiring only 1.56% of the parameter count and 63.89% of the memory overhead of Flow-MAE; its average inference speed was approximately 100% higher, and the model size was only 5.17 MB.
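The abstract does not give the exact fusion architecture, so the following is a minimal sketch of channel-attention feature fusion in PyTorch, assuming a squeeze-and-excitation-style gate applied over the per-packet embeddings produced by the pre-trained encoder. All module names, dimensions, and pooling choices here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ChannelAttentionFusion(nn.Module):
    """Squeeze-and-excitation-style gating over packet-sequence features.

    Input:  (batch, num_packets, feat_dim) per-packet embeddings,
            e.g. from a pre-trained Transformer encoder.
    Output: (batch, feat_dim) fused flow/session representation in which
            high-weight channels are amplified before classification.
    """
    def __init__(self, feat_dim: int, reduction: int = 4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // reduction),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim // reduction, feat_dim),  # expand back to channels
            nn.Sigmoid(),                                # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pooled = x.mean(dim=1)                    # squeeze: average over packets
        weights = self.gate(pooled)               # excitation: channel weights
        return (x * weights.unsqueeze(1)).mean(dim=1)  # reweight, then fuse

# Usage: fuse 5 packet embeddings of width 192, classify into 8 traffic classes.
fusion = ChannelAttentionFusion(feat_dim=192)
classifier = nn.Linear(192, 8)
packets = torch.randn(32, 5, 192)   # stand-in for encoder outputs
logits = classifier(fusion(packets))  # shape: (32, 8)
```

The gating step is what lets high-weight channels dominate the fused representation before the classification decision, which is the effect the abstract attributes to the channel-attention fusion stage.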
Keywords
traffic representation / structuring / self-supervised learning / attention enhancement / traffic detection

Classification
Information technology and security science

Citation
SUN Jianwen, ZHANG Bin, LI Hongyu, CHANG Heyu. Harnessing self-supervised learning to boost malicious traffic detection with enhanced attention[J]. Chinese Journal of Network and Information Security, 2025, 11(2): 136-151.

Funding
Innovation Fund for Graduate Students of the Cryptography Engineering Institute, Information Engineering University (2019f113)