
Research on FPGA-Based Neural Network Hardware Accelerators: A Review

孟庆昊 1, 边丽蘅 1

信号处理 (Journal of Signal Processing), 2025, Vol. 41, Issue (12): 1855-1873, 19. DOI: 10.12466/xhcl.2025.12.001

Author Information

  • 1. National Key Laboratory of Near-Space Environment Characteristics and Effects & National Key Laboratory of Air-Ground Integrated New Navigation System Technology, Beijing Institute of Technology, Beijing 100081, China

Abstract

With the widespread application of deep learning in fields such as computer vision, natural language processing, and autonomous driving, the complexity and scale of neural network models have grown explosively. This growth poses significant challenges to hardware computing capabilities. Traditional general-purpose computing platforms, such as CPUs and GPUs, are increasingly falling short in energy efficiency, real-time performance, and flexibility, particularly in edge computing and low-power scenarios, where their performance often fails to meet expectations. Consequently, algorithm optimization and hardware acceleration for neural networks have become prominent topics in current research. To address these challenges, field-programmable gate arrays (FPGAs), as reconfigurable hardware, have demonstrated unique advantages in deep learning hardware acceleration due to their parallelism, low power consumption, and flexible programmability. This paper systematically reviews FPGA-based neural network hardware acceleration technologies, covering the latest research progress in computing architecture optimization, hierarchical memory design, and model compression methods. It provides a detailed analysis of the computational characteristics and hardware acceleration frameworks of mainstream neural network models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), generative adversarial networks (GANs), and Transformers. Additionally, the paper outlines core FPGA acceleration techniques such as parallel computing architectures with double-buffering strategies, sparse matrix computation, and structured pruning. Finally, it discusses the challenges faced by FPGA-based neural network accelerators, including model optimization under resource-constrained conditions and the limited adaptability between algorithms and hardware. A series of feasible solutions are proposed, and future research directions are explored.
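The abstract lists structured pruning among the model-compression techniques surveyed. As a rough illustration of the general idea only (not the specific methods reviewed in the paper), the sketch below prunes whole output channels of a convolution weight tensor by their L1 norm; the tensor shape, `keep_ratio`, and function name are illustrative assumptions.

```python
import numpy as np

def prune_output_channels(weights: np.ndarray, keep_ratio: float = 0.5):
    """Structured-pruning sketch: drop entire output channels of a conv layer.

    weights:    (out_channels, in_channels, kH, kW) convolution weight tensor.
    keep_ratio: fraction of output channels to keep (illustrative default).
    Returns the pruned weights and the indices of the kept channels, which
    downstream layers would need to respect.
    """
    out_channels = weights.shape[0]
    # Rank channels by L1 norm -- a common saliency proxy in structured pruning.
    l1_norms = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    n_keep = max(1, int(out_channels * keep_ratio))
    keep_idx = np.sort(np.argsort(l1_norms)[-n_keep:])
    return weights[keep_idx], keep_idx

# Toy usage: a hypothetical 3x3 conv layer with 64 output and 32 input channels.
w = np.random.randn(64, 32, 3, 3).astype(np.float32)
pruned_w, kept = prune_output_channels(w, keep_ratio=0.5)
print(pruned_w.shape)  # (32, 32, 3, 3): a smaller but still dense tensor
```

Unlike unstructured (element-wise) sparsity, this kind of channel-level pruning leaves the weight tensor dense and regular, which is why it maps well onto FPGA datapaths without dedicated sparse-matrix hardware.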

Keywords

neural network/field-programmable gate array/hardware acceleration/optimization

Classification

Information Technology and Security Science

Cite This Article

孟庆昊, 边丽蘅. 基于FPGA的神经网络硬件加速器研究综述[J]. 信号处理, 2025, 41(12): 1855-1873, 19.

Funding

National Natural Science Foundation of China (62322502, 61827901, 62088101)

信号处理 (Journal of Signal Processing) | Open Access | PKU Core Journal (北大核心) | ISSN 1003-0530