
基于CUDA的GMM模型快速训练方法

吴奎 宋彦 戴礼荣

数据采集与处理 (Journal of Data Acquisition and Processing), 2012, Vol. 27, Issue 1: 85-90, 6.


CUDA-Based Fast GMM Model Training Method and Its Application

吴奎¹ 宋彦¹ 戴礼荣¹

作者信息 (Author Information)

  • 1. Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei 230027, China

Abstract

Due to its ability to approximate arbitrary distributions, the Gaussian mixture model (GMM) is widely applied in the field of pattern recognition. The iterative expectation-maximization (EM) algorithm is usually used for GMM parameter estimation, but the computational cost of model training becomes extremely high when large amounts of training data and a large number of mixture components are involved. The compute unified device architecture (CUDA) technology provided by NVIDIA Corporation enables fast parallel computation by running thousands of threads simultaneously on a graphics processing unit (GPU). A fast GMM training implementation using CUDA is presented, which is especially suitable for large amounts of training data. The implementation comprises two parts, i.e., the K-means algorithm for model initialization and the EM algorithm for parameter estimation. Furthermore, the fast training method is applied to training language GMMs. Experimental results show that language model training on an NVIDIA GTS250 GPU is about 26 times faster than the traditional implementation on a single core of an Intel Dual-Core Pentium IV 3.0 GHz CPU.
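The abstract describes two stages, K-means initialization followed by EM re-estimation. The serial CPU-side recursion that the paper parallelizes can be sketched as follows for a diagonal-covariance GMM; this is a minimal illustrative NumPy version, not the paper's CUDA implementation, and the function name `em_gmm` and the optional `mu0` initialization hook (standing in for the K-means step) are assumptions for illustration. Note that the E-step is independent per data point and the M-step is a reduction over points, which is exactly the structure that maps onto thousands of GPU threads.

```python
import numpy as np

def em_gmm(X, K, iters=50, mu0=None, seed=0):
    """Serial EM for a diagonal-covariance GMM (illustrative sketch only).

    X: (N, D) data matrix; K: number of mixture components.
    Returns mixture weights w (K,), means mu (K, D), variances var (K, D).
    """
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialization: the paper uses K-means; here we fall back to random
    # data points unless explicit initial means are supplied via mu0.
    if mu0 is not None:
        mu = np.asarray(mu0, dtype=float).copy()
    else:
        mu = X[rng.choice(N, K, replace=False)].copy()
    var = np.full((K, D), X.var(axis=0))   # diagonal covariances
    w = np.full(K, 1.0 / K)                # mixture weights
    for _ in range(iters):
        # E-step: responsibilities gamma[n, k] ∝ w_k * N(x_n | mu_k, var_k),
        # computed in the log domain and normalized per point.
        log_p = -0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2.0 * np.pi * var)).sum(axis=2) + np.log(w)
        log_p -= log_p.max(axis=1, keepdims=True)   # numerical stability
        gamma = np.exp(log_p)
        gamma /= gamma.sum(axis=1, keepdims=True)
        # M-step: closed-form re-estimation from soft counts (a reduction
        # over all N points -- the part mapped to parallel GPU reductions).
        Nk = gamma.sum(axis=0)
        w = Nk / N
        mu = (gamma.T @ X) / Nk[:, None]
        var = (gamma.T @ X ** 2) / Nk[:, None] - mu ** 2
        var = np.maximum(var, 1e-6)                 # floor to avoid collapse
    return w, mu, var
```

On the GPU, the (N, K) responsibility matrix of the E-step is computed with one thread per point-component pair, and the M-step sums become parallel reductions, which is where the reported speedup comes from.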

关键词

混合高斯模型/语种识别/图形处理单元/统一计算设备架构

Key words

Gaussian mixture model (GMM)/ language identification/ graphic processing unit (GPU)/ compute unified device architecture (CUDA)

分类 (Classification)

信息技术与安全科学 (Information Technology and Security Science)

引用本文 (Cite this article)

吴奎, 宋彦, 戴礼荣. 基于CUDA的GMM模型快速训练方法[J]. 数据采集与处理, 2012, 27(1): 85-90, 6.

数据采集与处理 (ISSN 1004-9037): OA | 北大核心 | CSCD | CSTPCD
