NM-SpMM: A semi-structured sparse matrix multiplication algorithm for domestic heterogeneous vector processors
Deep neural networks have achieved excellent results in natural language processing, computer vision, and other fields. With the growth in the scale of data processed by intelligent applications and the rapid development of large models, the demands on the inference performance of deep neural networks keep increasing, and N:M semi-structured sparsification has become one of the key techniques for balancing computing power requirements and application quality. The domestic heterogeneous vector processor FT-M7032 offers ample room for exploiting data-level and instruction-level parallelism in intelligent model processing. To address the diversity of sparse patterns in N:M semi-structured sparse model computation, a flexibly configurable sparse matrix multiplication algorithm, NM-SpMM, is proposed for FT-M7032. NM-SpMM designs an efficient compressed offset address (COA) sparse encoding format, which eliminates the impact of the semi-structured parameter configuration on sparse data access and computation. Based on the COA encoding, NM-SpMM applies fine-grained optimizations to sparse matrix multiplication along different dimensions. Experimental results on a single FT-M7032 core show that NM-SpMM achieves speedups of 1.73 to 21.00 times over dense matrix multiplication, and speedups of 0.04 to 1.04 times over an NVIDIA V100 GPU using the cuSPARSE sparse computing library.
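The abstract describes the COA (compressed offset address) encoding only at a high level. As a minimal illustration of the general idea behind N:M semi-structured sparsity with an offset-based encoding (not the paper's actual COA layout or the NM-SpMM kernel), the C sketch below compresses a 2:4-sparse row into its nonzero values plus in-group offsets and then uses the compressed form in a sparse-dense dot product; the sparsity parameters, array layout, and function names are assumptions chosen for illustration only.

/* Sketch of N:M semi-structured sparsity: in every group of M consecutive
 * weights, at most N are nonzero. The compressed form keeps only the nonzero
 * values plus each value's offset inside its group. This illustrates the
 * principle behind an offset-based encoding such as COA; the real COA format
 * in NM-SpMM is not specified in the abstract, so this layout is assumed. */
#include <stdio.h>

#define N 2   /* nonzeros kept per group (assumed 2:4 sparsity) */
#define M 4   /* group size */
#define K 16  /* row length, a multiple of M */

/* Compress one dense row: values[] holds the kept nonzeros, offsets[] holds
 * each kept element's position (0..M-1) within its group. */
static void compress_nm(const float *dense, float *values, unsigned char *offsets) {
    for (int g = 0; g < K / M; g++) {
        int kept = 0;
        for (int j = 0; j < M && kept < N; j++) {
            if (dense[g * M + j] != 0.0f) {
                values[g * N + kept]  = dense[g * M + j];
                offsets[g * N + kept] = (unsigned char)j;
                kept++;
            }
        }
        for (; kept < N; kept++) {          /* pad groups with < N nonzeros */
            values[g * N + kept]  = 0.0f;
            offsets[g * N + kept] = 0;
        }
    }
}

/* Sparse-dense dot product using only the compressed data: each stored value
 * multiplies the dense-vector element at its group base plus stored offset. */
static float spmv_row(const float *values, const unsigned char *offsets, const float *x) {
    float acc = 0.0f;
    for (int g = 0; g < K / M; g++)
        for (int i = 0; i < N; i++)
            acc += values[g * N + i] * x[g * M + offsets[g * N + i]];
    return acc;
}

int main(void) {
    /* a 2:4-sparse row: at most 2 nonzeros in every group of 4 */
    float dense[K] = {1, 0, 0, 2,  0, 3, 4, 0,  0, 0, 5, 6,  7, 0, 8, 0};
    float x[K];
    for (int i = 0; i < K; i++) x[i] = 1.0f;

    float values[K / M * N];
    unsigned char offsets[K / M * N];
    compress_nm(dense, values, offsets);

    printf("dot = %.1f\n", spmv_row(values, offsets, x)); /* 1+2+...+8 = 36.0 */
    return 0;
}

Because the offsets are stored alongside the values, the kernel never touches the pruned positions, which is the property that lets an N:M sparse matrix multiplication skip work in proportion to the sparsity ratio.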
姜晶菲;何源宏;许金伟;许诗瑶;钱希福
National Key Laboratory of Parallel and Distributed Computing, College of Computer Science, National University of Defense Technology, Changsha 410073, Hunan, China
Computer and Automation
deep neural network; graphics processing unit; vector processor; sparse matrix multiplication; pipeline
Computer Engineering & Science (《计算机工程与科学》), 2024(007)
Pages 1141-1150 (10 pages)