天津科技大学学报 (Journal of Tianjin University of Science & Technology), 2024, Vol. 39, Issue 3: 56-63, 8. DOI: 10.13364/j.issn.1672-6510.20230145
Gated Transformer Based on Prob-Sparse Attention
Abstract
In reinforcement learning, the agent encodes the state sequence and uses historical information to guide action selection, typically with recurrent neural networks. Such traditional methods suffer from vanishing and exploding gradients and struggle with long sequences. The Transformer leverages self-attention to capture long-range information; however, the standard Transformer is unstable and computationally expensive in reinforcement learning. Gated Transformer-XL (GTrXL) improves Transformer training stability but remains complex. To solve these problems, in this article we propose a prob-sparse attention gated Transformer (PS-GTr) model, which introduces a prob-sparse attention mechanism on top of the identity-map reordering and gating mechanism of GTrXL, reducing time and space complexity and further improving training efficiency. Experimental verification showed that PS-GTr achieves performance comparable to GTrXL on reinforcement learning tasks with lower training time and memory usage.

Key words
deep reinforcement learning / self-attention / prob-sparse attention

Classification
Information technology and security science
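The gating mechanism the abstract inherits from GTrXL replaces each residual connection with a GRU-style gate, whose update-gate bias is initialized positive so the layer starts out close to an identity mapping. A minimal NumPy sketch, assuming plain matrix parameters (the function and weight names are illustrative, not from the paper):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_gate(x, y, params, b_g=2.0):
    """GRU-style gating layer, as used in GTrXL in place of a residual
    connection: x is the sublayer input (skip path), y the sublayer output.
    `params` holds six weight matrices; b_g > 0 biases the update gate
    toward zero, so the layer behaves like the identity map initially."""
    Wr, Ur, Wz, Uz, Wg, Ug = params
    r = sigmoid(y @ Wr + x @ Ur)        # reset gate
    z = sigmoid(y @ Wz + x @ Uz - b_g)  # update gate, biased closed
    h = np.tanh(y @ Wg + (r * x) @ Ug)  # candidate state
    return (1.0 - z) * x + z * h        # mostly x when z is near 0
```

With all weights at zero, the gate passes a scaled copy of the input through, which is what makes early training behave like a network without the Transformer sublayer.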
Citation
Zhao Tingting, Ding Qiaochu, Ma Chong, Chen Yarui, Wang Yuan. Gated Transformer based on prob-sparse attention [J]. Journal of Tianjin University of Science & Technology, 2024, 39(3): 56-63, 8.

Funding
National Natural Science Foundation of China (61976156)
Tianjin Enterprise Science and Technology Commissioner Project (20YDTPJC00560)
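The prob-sparse attention the abstract builds on keeps full softmax attention only for the few queries whose score distribution deviates most from uniform, and gives the remaining queries a cheap fallback. A minimal NumPy sketch under that reading (the function name and the mean-of-values fallback are illustrative assumptions; the full mechanism also samples keys when scoring queries, which this sketch omits for clarity):

```python
import numpy as np

def prob_sparse_attention(Q, K, V, u=None):
    """Sketch of prob-sparse attention: only the u most "active" queries
    attend over all keys; the rest fall back to the mean of V (the output
    full attention would give a perfectly uniform query)."""
    L_q, d = Q.shape
    if u is None:
        u = max(1, int(np.ceil(np.log(L_q))))  # keep O(log L) queries
    scores = Q @ K.T / np.sqrt(d)              # (L_q, L_k)
    # Sparsity measure per query: max score minus mean score; large values
    # mean the query's attention is far from uniform, i.e. informative.
    M = scores.max(axis=1) - scores.mean(axis=1)
    top = np.argsort(-M)[:u]                   # indices of active queries
    # Fallback output for inactive queries: mean of the value rows.
    out = np.repeat(V.mean(axis=0, keepdims=True), L_q, axis=0)
    # Exact softmax attention only for the selected queries.
    s = scores[top]
    w = np.exp(s - s.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    out[top] = w @ V
    return out
```

Because only u = O(log L) query rows go through the softmax, the dominant cost drops from O(L²) toward O(L log L) per head, which is the complexity reduction the abstract credits to PS-GTr.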