Continual learning using split network and adaptive synaptic plasticity
Spiking neural networks suffer from catastrophic forgetting during continual learning. Existing unsupervised methods require a sufficiently large network to learn well, which consumes substantial computing time and resources; this study aims to achieve good continual learning with a small network. Inspired by neuroscience, a method combining network splitting with existing adaptive synaptic plasticity is proposed: the network is first divided into non-overlapping subnetworks, one per task, and each subnetwork then learns without supervision through adaptive synaptic plasticity. The method is easy to implement, requires little computational overhead, and allows small-scale spiking neural networks to maintain high performance across many sequentially presented tasks. The results show that on small-scale spiking neural networks, test accuracy with network splitting is markedly higher than without it: an average improvement of about 36% over the four datasets.
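To make the splitting scheme concrete, the sketch below illustrates the idea in Python under stated assumptions: spikes are replaced by a rate-based stand-in, the neuron layer is partitioned into disjoint subnetworks (one per task), and only the subnetwork owning the current task is updated, here with a simple winner-take-all Hebbian rule whose learning rate shrinks as weights saturate. The rule, all sizes, and all names are illustrative stand-ins for the paper's adaptive synaptic plasticity, not the authors' implementation.

    # Minimal sketch (not the authors' code) of the split-network idea.
    import numpy as np

    rng = np.random.default_rng(0)

    N_INPUT = 784       # e.g. flattened 28x28 images (illustrative)
    N_NEURONS = 100     # small-scale network, as in the paper's setting
    N_TASKS = 4         # one disjoint subnetwork per task

    # Non-overlapping partition of the neuron indices: task t owns slice t.
    splits = np.array_split(np.arange(N_NEURONS), N_TASKS)

    W = rng.random((N_INPUT, N_NEURONS)) * 0.3   # input -> neuron weights

    def train_task(task_id, data, epochs=1, base_lr=0.01):
        """Unsupervised Hebbian-style training restricted to one subnetwork.

        `data` has shape (n_samples, N_INPUT) with values in [0, 1],
        interpreted as input firing rates (a rate-based stand-in for
        spike trains)."""
        idx = splits[task_id]                    # neurons owned by this task
        for _ in range(epochs):
            for x in data:
                act = x @ W[:, idx]              # subnetwork activations
                winner = idx[np.argmax(act)]     # winner-take-all in the split
                # Adaptive step: learning rate shrinks as the weight vector
                # saturates, a crude stand-in for adaptive plasticity.
                lr = base_lr / (1.0 + W[:, winner].sum() / N_INPUT)
                # Hebbian update pulling the weights toward the input.
                W[:, winner] += lr * (x - W[:, winner])

    # Sequential (continual) learning: tasks arrive one at a time.
    for t in range(N_TASKS):
        fake_data = rng.random((32, N_INPUT))    # placeholder for task t's data
        train_task(t, fake_data)

Because each task writes only to its own slice of the weight matrix, earlier tasks' weights are never overwritten, which is the mechanism the abstract credits for avoiding catastrophic forgetting.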
Chen Huanwen (陈焕文)
School of Automation, Central South University, Changsha 410083, Hunan, China
Computer and Automation
Keywords: network split; spiking neural networks; continual learning; catastrophic forgetting; unsupervised
Journal of Huazhong University of Science and Technology (Natural Science Edition), 2024, No. 3
Pages 156-160 (5 pages)
Supported by the National Natural Science Foundation of China (52172169) and the Natural Science Foundation of Hunan Province (2021JJ30863).