
基于多任务自适应知识蒸馏的语音增强


太原理工大学学报, 2024, Vol.55, Issue(4): 720-726, 7. DOI: 10.16355/j.tyut.1007-9432.20230259


Speech Enhancement Based on Multi-Task Adaptive Knowledge Distillation

张刚敏¹, 李雅荣¹, 贾海蓉¹, 王鲜霞², 段淑斐¹

Author information

  • 1. College of Electronic Information and Optical Engineering, Taiyuan University of Technology, Jinzhong 030600, Shanxi, China
  • 2. School of Mathematics, Taiyuan University of Technology, Jinzhong 030600, Shanxi, China


Abstract

[Purposes] To reduce the computational cost of complex models in both time and hardware, and to improve the performance of speech enhancement algorithms, a speech enhancement algorithm using multi-task adaptive knowledge distillation is proposed. [Methods] First, knowledge distillation is adopted to address the problems of existing speech enhancement models: they are too large, have too many parameters, and incur high computational cost. Second, the differences between time-frequency units are fully considered, and a weighting factor is introduced to optimize the traditional loss function and improve the performance of the student network. To prevent the uncertainty of the teacher network's predictions from degrading the student network, a knowledge distillation network with multi-task adaptive learning is built, which better exploits the correlation between tasks to optimize the model. [Findings] Simulation results show that the proposed algorithm effectively improves the performance of the speech enhancement model while reducing the number of parameters and shortening the computation time.
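The weighted distillation loss described in the abstract can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the per-unit weighting rule shown here (down-weighting time-frequency units where the teacher's prediction deviates from the clean target) and the `alpha` mixing parameter are assumptions, used only to show how such a loss combines a distillation term with a supervised term.

```python
def weighted_distillation_loss(student, teacher, target, alpha=0.5):
    """Weighted knowledge-distillation loss over time-frequency (T-F) units.

    student, teacher, target: 2-D lists of T-F values (e.g. mask estimates).
    Each T-F unit gets a weight based on how reliable the teacher's
    prediction is there (closer to the clean target => higher weight),
    so uncertain teacher predictions influence the student less.
    The specific weighting rule below is an illustrative assumption.
    """
    loss, n = 0.0, 0
    for s_row, t_row, y_row in zip(student, teacher, target):
        for s, t, y in zip(s_row, t_row, y_row):
            w = 1.0 / (1.0 + abs(t - y))        # down-weight unreliable teacher units
            loss += alpha * w * (s - t) ** 2     # distillation term (teacher as soft target)
            loss += (1 - alpha) * (s - y) ** 2   # supervised term (clean target)
            n += 1
    return loss / n
```

When the teacher agrees with the clean target, the distillation term carries full weight; where the teacher is uncertain (far from the target), its influence on the student shrinks, which is the role the abstract assigns to the weighting factor.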

Key words

speech enhancement / knowledge distillation / multi-task adaptive learning / weighted loss function

Classification

Information Technology and Security Science

Cite this article

张刚敏, 李雅荣, 贾海蓉, 王鲜霞, 段淑斐. 基于多任务自适应知识蒸馏的语音增强[J]. 太原理工大学学报, 2024, 55(4): 720-726, 7.

Funding

National Natural Science Foundation of China (12004275)

Shanxi Scholarship Council of China (2020-042)

Natural Science Foundation of Shanxi Province (20210302123186)

太原理工大学学报 (ISSN 1007-9432) · OA · 北大核心 · CSTPCD