
Semantic Segmentation of Video Frame Scenes Based on a Lightweight Model

Abstract

Scene segmentation is crucial for computers to understand road environments. Large semantic segmentation models based on deep learning usually achieve excellent segmentation performance, but their huge parameter counts and heavy computation prevent flexible deployment on edge devices. To address this problem, this paper proposes an efficient scene semantic segmentation model, E-SegNet, designed from a lightweight perspective. First, the lightweight feature extraction network EfficientNet-B0 is used as the encoder to extract hierarchical features. Then, the self-attention-based CPAM and CCAM modules establish dependencies between each element of the deep features and a global central element along the spatial and channel dimensions, respectively. Finally, deep and shallow features are fused to produce the final prediction. Experimental results on the video frame dataset CamSeq01 show that the proposed E-SegNet achieves better segmentation performance than DeeplabV3+ with less than 1/10 of its parameters and about 1/4 of its computation, demonstrating the model's effectiveness and providing a more feasible scheme for deploying lightweight models on edge devices.
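The abstract describes CPAM and CCAM only at a high level: self-attention modules that relate each element of the deep feature map to a global central element, along the spatial and channel dimensions respectively. The following is a minimal NumPy sketch of that idea, not the paper's actual implementation. It assumes the "global central element" is obtained by average pooling (the abstract does not specify this), and the function names `central_spatial_attention` and `central_channel_attention` are hypothetical stand-ins for CPAM and CCAM.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def central_spatial_attention(feat):
    """Spatial attention sketch (CPAM-like, hypothetical).

    Each spatial position is scored against a single global central
    vector (here: the spatial mean of the feature map), and the
    attended context is fused back with a residual addition.
    feat: array of shape (C, H, W).
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)        # (C, N) with N = H*W
    center = flat.mean(axis=1)           # (C,)  global central element
    scores = center @ flat               # (N,)  position-to-center similarity
    weights = softmax(scores)            # attention over spatial positions
    context = flat @ weights             # (C,)  aggregated global context
    out = flat + context[:, None]        # residual fusion, broadcast over N
    return out.reshape(C, H, W)

def central_channel_attention(feat):
    """Channel attention sketch (CCAM-like, hypothetical).

    Symmetric to the spatial case: each channel is scored against a
    central descriptor averaged across channels.
    feat: array of shape (C, H, W).
    """
    C, H, W = feat.shape
    flat = feat.reshape(C, H * W)        # (C, N)
    center = flat.mean(axis=0)           # (N,)  central descriptor
    scores = flat @ center               # (C,)  channel-to-center similarity
    weights = softmax(scores)            # attention over channels
    context = weights @ flat             # (N,)  aggregated global context
    out = flat + context[None, :]        # residual fusion, broadcast over C
    return out.reshape(C, H, W)
```

Both functions preserve the feature map's shape, so they can be applied back-to-back to the encoder's deepest features before fusion with shallow features, mirroring the pipeline the abstract outlines.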

SHI Xianwei; FAN Xin

Political and Legal Affairs Commission of the CPC Xinjiang Uygur Autonomous Region Committee, Urumqi 830023, Xinjiang, China; School of Software, Xinjiang University, Urumqi 830091, Xinjiang, China

Computer and Automation

deep learning; lightweight; scene segmentation; spatial attention; channel attention

Computer and Modernization, 2024 (008)

Pages 49-53 (5 pages)

Supported by the Key Research and Development Program of Xinjiang Uygur Autonomous Region (2021B01002)

DOI: 10.3969/j.issn.1006-2475.2024.08.009
