
Convolutional Neural Network accelerator based on computing in memory


Journal of Terahertz Science and Electronic Information Technology, 2025, Vol. 23, Issue (2): 170-174, 5. DOI: 10.11805/TKYDA2023242


卢莹莹 1, 孙翔宇 2, 计炜梁 2, 邢占强 2

Author Information

  • 1. Institute of Electronic Engineering, China Academy of Engineering Physics, Mianyang, Sichuan 621999 || Microsystem and Terahertz Research Center, China Academy of Engineering Physics, Chengdu, Sichuan 610200 || Graduate School of China Academy of Engineering Physics, Beijing 100088
  • 2. Institute of Electronic Engineering, China Academy of Engineering Physics, Mianyang, Sichuan 621999 || Microsystem and Terahertz Research Center, China Academy of Engineering Physics, Chengdu, Sichuan 610200

Abstract

Implementations of Convolutional Neural Networks (CNNs) on the Von Neumann architecture struggle to meet the requirements of high performance and low power consumption. Therefore, a CNN accelerator based on a computing-in-memory architecture is designed. The computing-in-memory architecture is realized with a Resistive Random Access Memory (RRAM) circuit structure, and an efficient data input pipeline together with a CNN processing unit handles large-scale image data, achieving high-performance digital image recognition. Simulation results show that the CNN accelerator delivers fast computation, with a clock frequency of up to 100 MHz; in addition, the area of the structure is 300 742 μm², 56.6% of that of the conventional design method. The acceleration module designed in this paper greatly improves the speed and reduces the energy consumption of the CNN accelerator, offering guidance for the design of high-performance neural network accelerators.
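The abstract's key idea, that an RRAM array can serve as both weight storage and compute fabric, can be illustrated with a minimal numerical sketch. This is an assumption-laden toy model, not the paper's circuit: it treats kernel weights as crossbar conductances and models each column current as a multiply-accumulate over the row inputs, with convolution mapped onto the crossbar via the standard im2col unrolling.

```python
# Hedged sketch (assumed model, not the paper's design): an RRAM crossbar
# stores weights as conductances G[i][j]; applying row voltages V[i] yields
# column currents I[j] = sum_i G[i][j] * V[i], i.e. one analog
# multiply-accumulate per column.

def crossbar_mac(G, V):
    """Column currents of a crossbar: I[j] = sum_i G[i][j] * V[i]."""
    return [sum(G[i][j] * V[i] for i in range(len(G))) for j in range(len(G[0]))]

def conv2d_via_crossbar(image, kernel):
    """Valid-mode 2-D convolution: each k x k patch is unrolled into a row
    of input voltages (im2col), and the kernel is stored as one k*k-element
    conductance column, so every output pixel is a single crossbar readout."""
    k = len(kernel)
    G = [[kernel[i][j]] for i in range(k) for j in range(k)]  # (k*k) x 1 column
    n = len(image) - k + 1
    out = []
    for r in range(n):
        row = []
        for c in range(n):
            patch = [image[r + i][c + j] for i in range(k) for j in range(k)]
            row.append(crossbar_mac(G, patch)[0])
        out.append(row)
    return out

image = [[1, 2, 3, 0],
         [0, 1, 2, 3],
         [3, 0, 1, 2],
         [2, 3, 0, 1]]
kernel = [[1, 0, -1],
          [1, 0, -1],
          [1, 0, -1]]

result = conv2d_via_crossbar(image, kernel)
# result == [[-2, -2], [2, -2]]
```

In a physical crossbar all column currents are summed concurrently by Kirchhoff's current law, so the loop over patches here stands in for what the hardware evaluates in a single analog step per input vector.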


Key words

computing in memory / Convolutional Neural Network (CNN) / accelerator / input pipeline / processing unit

Classification

Information Technology and Security Science

Cite this article

卢莹莹, 孙翔宇, 计炜梁, 邢占强. Convolutional Neural Network accelerator based on computing in memory[J]. Journal of Terahertz Science and Electronic Information Technology, 2025, 23(2): 170-174, 5.

Journal of Terahertz Science and Electronic Information Technology

ISSN 2095-4980
