
Scalable Low Power Accelerator for Sparse Recurrent Neural Network

金磐石 1, 李俊杰 2, 王静逸 2, 李鹏翀 3, 邢磊 2, 李晓栋 1

天地一体化信息网络, 2023, Vol. 4, Issue 4: 79-85, 7. DOI: 10.11959/j.issn.2096-8930.2023045

Author Information

  • 1. China Construction Bank Corporation, Beijing 100034, China
  • 2. CCB Fintech Co., Ltd., Shanghai 321004, China
  • 3. Inspur Electronic Information Industry Co., Ltd., Jinan, Shandong 250000, China


Abstract

The use of edge computing devices in bank outlets for passenger flow analysis, security protection, and risk prevention and control is increasingly widespread, and the performance and power consumption of AI inference chips have become key factors in selecting such devices. Recurrent neural networks (RNN) suffer from high power consumption, weak inference performance, and low energy efficiency, caused by data dependence and low data reusability. To address these problems, this paper implemented a sparse RNN low-power accelerator with scalable voltage on an FPGA and verified it on an edge computing device. First, the sparse RNN was analyzed and the processing array was designed through network compression. Second, because the workload of a sparse RNN is unbalanced, a voltage scaling method was introduced to maintain low power consumption and high throughput. Experiments show that this method significantly improves the RNN inference speed of the system and reduces the processing power consumption of the chip.
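To make the computation pattern concrete, the sketch below (illustrative only, not the authors' FPGA design; the matrix sizes, sparsity level, and function names are assumptions) runs an Elman-style recurrent layer with CSR-compressed weight matrices in Python/SciPy. It highlights the two properties the abstract points to: the sequential dependence between time steps, and the uneven number of non-zeros per output row that makes the sparse workload unbalanced.

```python
# Minimal sketch (assumed names and sizes; not the paper's FPGA implementation):
# Elman-style RNN steps with CSR-compressed weight matrices.
import numpy as np
from scipy.sparse import csr_matrix, random as sparse_random

def sparse_rnn_step(W_ih, W_hh, b, x_t, h_prev):
    # h_t = tanh(W_ih @ x_t + W_hh @ h_prev + b); both matmuls are sparse.
    return np.tanh(W_ih @ x_t + W_hh @ h_prev + b)

hidden, inputs = 256, 128
# Roughly 90% of the weights pruned away, as in a heavily compressed RNN.
W_ih = csr_matrix(sparse_random(hidden, inputs, density=0.1, random_state=0))
W_hh = csr_matrix(sparse_random(hidden, hidden, density=0.1, random_state=0))
b = np.zeros(hidden)

rng = np.random.default_rng(0)
h = np.zeros(hidden)
for t in range(10):
    # Each step needs the previous hidden state: the data dependence that
    # limits parallelism across time steps.
    x_t = rng.random(inputs)
    h = sparse_rnn_step(W_ih, W_hh, b, x_t, h)

# Non-zeros per output row vary, so per-row work in a processing array is
# unbalanced -- the imbalance the voltage scaling scheme is meant to absorb.
row_nnz = np.diff(W_hh.indptr)
print(row_nnz.min(), row_nnz.max(), float(row_nnz.mean()))
```

In a hardware processing array, rows with more non-zeros take longer to finish; this per-row imbalance is the unbalanced workload the abstract refers to.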

Keywords

RNN / sparse / low power consumption / acceleration scheme

Classification

Information Technology and Security Science

Citation

金磐石, 李俊杰, 王静逸, 李鹏翀, 邢磊, 李晓栋. Scalable low power accelerator for sparse recurrent neural network[J]. 天地一体化信息网络, 2023, 4(4): 79-85, 7.

天地一体化信息网络 · OACSTPCD · ISSN 2096-8930
