Multi-source Separation Method Based on an Improved Transformer Model
Current mainstream speech separation models are built on complex recurrent networks or Transformer networks. The high complexity of the Transformer makes training difficult, and the high sampling rate of audio forces very long inputs at the sample level, yielding incomplete features: long speech feature sequences cannot be modeled directly, so features are lost. To address this, this paper proposes an improved Transformer-based network model. First, down-sampling blocks are added to the encoder of the original Transformer model to compute high-level features on different time scales while reducing the complexity of the feature space. Second, up-sampling layers are added to the decoder and fused with the features of the corresponding encoder down-sampling layers, ensuring that no features are lost and improving the model's separation ability. Finally, an improved sliding-window attention mechanism is introduced in the separation layer of the model. The sliding window uses a cyclic-shift technique: each new feature window contains part of the old feature windows and fuses window-edge information, completing the information exchange between feature windows, producing feature encodings and feature position encodings, and raising the correlation between feature representations. Experiments show that the method reaches 13.5 dB on the SI-SNR metric and 14.1 dB on the SDR metric, outperforming previous methods.
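The three components the abstract describes can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration, not the authors' implementation: the module names (`DownBlock`, `UpBlock`, `shifted_windows`), channel counts, and kernel sizes are hypothetical; it shows only the down-sampling block, the up-sampling layer with encoder-feature fusion, and the cyclic-shift window partitioning used by shifted-window attention.

```python
import torch
import torch.nn as nn

class DownBlock(nn.Module):
    """Encoder down-sampling block (hypothetical): a strided 1-D conv
    halves the time axis, giving features at a coarser time scale
    while shrinking the feature space."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv1d(ch, ch, kernel_size=4, stride=2, padding=1)

    def forward(self, x):          # x: (batch, ch, time)
        return torch.relu(self.conv(x))

class UpBlock(nn.Module):
    """Decoder up-sampling layer (hypothetical): a transposed conv
    doubles the time axis, and the matching encoder feature map is
    added back (skip fusion) so that fine-grained features are not lost."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.ConvTranspose1d(ch, ch, kernel_size=4, stride=2, padding=1)

    def forward(self, x, skip):    # skip: encoder features at the same scale
        return torch.relu(self.conv(x)) + skip

def shifted_windows(x, win, shift):
    """Partition (batch, time, ch) into attention windows after a cyclic
    shift. torch.roll rotates the sequence by `shift`, so each new window
    mixes the tail of one old window with the head of the next -- the
    cross-window information exchange the abstract describes."""
    x = torch.roll(x, shifts=-shift, dims=1)
    b, t, c = x.shape
    return x.view(b, t // win, win, c)

# Toy shapes only; real separators operate on much longer sequences.
x = torch.randn(2, 16, 64)         # (batch, channels, time)
down, up = DownBlock(16), UpBlock(16)
h = down(x)                        # time 64 -> 32
y = up(h, x)                       # time 32 -> 64, fused with the encoder skip
w = shifted_windows(y.transpose(1, 2), win=8, shift=4)
```

Within each window of `w`, ordinary self-attention would then be applied; the cyclic shift alternates with an unshifted partition in schemes of this kind so that every position eventually attends across window boundaries.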
Zeng Yuan; Li Jian; Ma Mingxing; Pang Runjia; He Bin
School of Information and Communication Engineering, North University of China, Taiyuan 030051, Shanxi, China || State Key Laboratory of Dynamic Measurement Technology, North University of China, Taiyuan 030051, Shanxi, China
Computer and Automation
up- and down-sampling layers; Transformer; feature coding; sliding window attention mechanism; deep learning
《计算机技术与发展》 (Computer Technology and Development), 2024, No. 5
Pages 60-65 (6 pages)
Youth Science Fund of the National Natural Science Foundation of China (61901419)