电子科技, 2024, Vol. 37, Issue (4): 1-7, 7. DOI: 10.16180/j.cnki.issn1007-7820.2024.04.001
Multi-Encoder Transformer for End-to-End Speech Recognition
Abstract
The widely used Transformer model has a strong ability to capture global dependencies, but its shallow layers tend to ignore local feature information. To address this problem, this study proposes a multi-encoder approach to improve speech feature extraction. An additional convolutional encoder branch is added to strengthen the capture of local feature information, compensating for the shallow Transformer layers' neglect of local features and effectively fusing the global and local dependencies of the audio feature sequence. In other words, a Transformer-based multi-encoder model is proposed. Experiments on the open-source Mandarin Chinese data set Aishell-1 show that, without an external language model, the proposed Transformer-based multi-encoder model achieves a 4.00% relative reduction in character error rate compared with the Transformer model. On an internal, non-public Shanghainese dialect data set, the improvement is more pronounced: the character error rate drops from 19.92% to 10.31%, a relative reduction of 48.24%.
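To make the architecture outlined in the abstract concrete, the following is a minimal PyTorch sketch of a multi-encoder layout: a standard Transformer encoder runs in parallel with a convolutional branch over the same acoustic features, and the two streams are fused into one representation. The class names (ConvBranch, MultiEncoder), layer sizes, and the concatenate-and-project fusion scheme are illustrative assumptions and are not specified in the abstract; the paper's actual fusion design may differ.

```python
# Illustrative sketch only: a Transformer encoder plus a convolutional branch,
# fused by concatenation and a linear projection. All hyperparameters and the
# fusion scheme are assumptions, not the paper's reported implementation.
import torch
import torch.nn as nn


class ConvBranch(nn.Module):
    """Convolutional encoder branch for local feature extraction (assumed design)."""
    def __init__(self, d_model: int, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model) -> Conv1d expects (batch, d_model, time)
        return self.conv(x.transpose(1, 2)).transpose(1, 2)


class MultiEncoder(nn.Module):
    """Transformer encoder (global) + convolutional branch (local), fused per frame."""
    def __init__(self, d_model: int = 256, nhead: int = 4, num_layers: int = 6):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers)
        self.conv_branch = ConvBranch(d_model)
        # Simple concat-and-project fusion (assumption).
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, d_model) acoustic feature sequence
        global_feat = self.transformer(x)   # global dependencies
        local_feat = self.conv_branch(x)    # local dependencies
        return self.fuse(torch.cat([global_feat, local_feat], dim=-1))


if __name__ == "__main__":
    # Example: a batch of 4 utterances, 100 frames, 256-dim features.
    model = MultiEncoder()
    out = model(torch.randn(4, 100, 256))
    print(out.shape)  # torch.Size([4, 100, 256])
```

The fused frame-level representation would then feed a decoder (e.g. an attention decoder or CTC head) in a full end-to-end recognizer; that part is omitted here.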
Keywords
Transformer / speech recognition / end-to-end / deep neural networks / multi-encoder / multi-head attention / feature fusion / convolution branch networks
Classification
Information Technology and Security Science
Citation
Pang Jiangfei, Sun Zhanquan. Multi-Encoder Transformer for End-to-End Speech Recognition[J]. 电子科技, 2024, 37(4): 1-7, 7.
Funding
National Defense Basic Scientific Research Program (JCKY2019413D001)
Medical and Engineering Cross Project of University of Shanghai for Science and Technology (10-21-302-413)