
Fusion of Deep Reinforcement Learning in Joint Compression Method for Convolutional Neural Network
(融合深度强化学习的卷积神经网络联合压缩方法)

马祖鑫¹ 崔允贺¹ 秦永彬¹ 申国伟¹ 郭春¹ 陈意¹ 钱清²

Computer Engineering and Applications (计算机工程与应用), 2025, Vol. 61, Issue 6: 210-219,10. DOI: 10.3778/j.issn.1002-8331.2311-0002

Author information

  • 1. Engineering Research Center of Text Computing and Cognitive Intelligence, Ministry of Education, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China; State Key Laboratory of Public Big Data, Guizhou University, Guiyang 550025, China; Guizhou Provincial Characteristic Key Laboratory of Software Engineering and Information Security, Guizhou University, Guiyang 550025, China
  • 2. College of Information, Guizhou University of Finance and Economics, Guiyang 550025, China

Abstract

With the rise of concepts such as edge computing and edge intelligence, the lightweight deployment of convolutional neural networks has gradually become a research hotspot. Traditional convolutional neural network compression techniques usually perform pruning and quantization in separate, independent stages. This approach ignores the interaction between the pruning and quantization processes, so it cannot reach the jointly optimal pruning and quantization result, which degrades the performance of the compressed model. To solve this problem, this paper proposes CoTrim, a joint compression method for neural networks based on deep reinforcement learning. CoTrim performs channel pruning and weight quantization simultaneously, and uses a deep reinforcement learning algorithm to search for the globally optimal pruning and quantization strategy, balancing the impact of pruning and quantization on network performance. Experiments on VGG and ResNet on the CIFAR-10 dataset show that, for common single-branch convolution and residual convolution structures, CoTrim can compress VGG16 to 1.41% of its original model size with an accuracy loss of only 2.49 percentage points. Experiments on the compact networks MobileNet and DenseNet on the complex ImageNet-1K dataset show that, for depthwise separable convolution structures and densely connected structures, CoTrim still keeps the accuracy loss within an acceptable range while compressing the model size to between 1/5 and 1/8 of the original.
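The abstract describes a per-layer search over joint pruning and quantization actions. The idea can be sketched as a toy NumPy example, under assumed details that are not from the paper: magnitude-based channel pruning, uniform symmetric quantization, and a fixed action list standing in for the deep reinforcement learning agent (which would instead propose and refine these actions from an accuracy/size reward).

```python
import numpy as np

def prune_channels(w, ratio):
    """Zero out the `ratio` fraction of output channels with the smallest L2 norm."""
    k = int(w.shape[0] * ratio)
    if k == 0:
        return w.copy()
    weakest = np.argsort(np.linalg.norm(w, axis=1))[:k]
    out = w.copy()
    out[weakest] = 0.0
    return out

def quantize(w, bits):
    """Uniform symmetric quantization of weights to a given bit-width."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return w.copy() if scale == 0 else np.round(w / scale) * scale

def compressed_bits(w, ratio, bits):
    """Approximate storage cost: surviving channels stored at the chosen bit-width."""
    kept = w.shape[0] - int(w.shape[0] * ratio)
    return kept * w.shape[1] * bits

rng = np.random.default_rng(0)
layers = [rng.standard_normal((16, 8)) for _ in range(3)]  # toy (out, in) weight matrices

# Stand-in for the DRL policy: one (prune_ratio, bit-width) action per layer.
actions = [(0.25, 4), (0.5, 8), (0.25, 4)]

compressed = [quantize(prune_channels(w, r), b) for w, (r, b) in zip(layers, actions)]
total = sum(compressed_bits(w, r, b) for w, (r, b) in zip(layers, actions))
baseline = sum(w.size * 32 for w in layers)  # float32 reference
print(f"compressed/baseline bits: {total}/{baseline} = {total / baseline:.2%}")
```

Because pruning and quantization are applied in the same step, a search procedure scoring `compressed` models can trade off the two compression axes jointly rather than in sequential stages.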


Key words

convolutional neural network/deep reinforcement learning/model compression/channel pruning/weight quantization/edge intelligence

Classification

Information Technology and Security Science

Cite this article

马祖鑫, 崔允贺, 秦永彬, 申国伟, 郭春, 陈意, 钱清. Fusion of deep reinforcement learning in joint compression method for convolutional neural network[J]. Computer Engineering and Applications, 2025, 61(6): 210-219,10.

Funding

National Natural Science Foundation of China (62102111)

Guizhou Provincial Science and Technology Program (黔科合重大专项字[2024]003号)

Innovation Team of Big Data Security and Cybersecurity of Guizhou Higher Education Institutions (黔教技[2023]052号)

Computer Engineering and Applications (计算机工程与应用), Open Access, Peking University Core Journal, ISSN 1002-8331
