Optical remote sensing road extraction network based on GCN-guided model viewpoint
Open Access; indexed in 北大核心 (Peking University Core) and CSTPCD
In optical remote sensing images, roads are easily affected by multiple factors such as occlusions, pavement materials, and the surrounding environment, which blur their features. Even when existing road extraction methods strengthen feature perception, they still produce many misjudgments in feature-blurred regions. To address this problem, this paper proposes a road extraction network based on a GCN-guided model viewpoint (RGGVNet). RGGVNet adopts an encoder-decoder structure and introduces a GCN-based viewpoint guidance module (GVPG) at the encoder-decoder connections to repeatedly guide the model's viewpoint, thereby strengthening attention to feature-blurred regions. GVPG exploits the fact that GCN information propagation averages feature weights: the road-saliency levels of different regions in the feature map serve as a Laplacian matrix that participates in GCN information propagation, thereby guiding the model viewpoint. A dense guided-viewpoint strategy (DGVS) is further proposed, which densely connects the encoder, the GVPG modules, and the decoder to ensure effective viewpoint guidance while alleviating optimization difficulties. In the decoding stage, a multi-resolution feature fusion (MRFF) module minimizes the information offset and loss of road features at different scales during feature fusion and upsampling. On two public remote sensing road datasets, the proposed method achieves IoU of 65.84% and 69.36% and F1-scores of 79.40% and 81.90%, respectively. Both quantitative and qualitative results show that it outperforms other mainstream methods.
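The core idea of GVPG described in the abstract (using per-region road-saliency levels to weight GCN message passing so the model's attention is steered toward feature-blurred road regions) can be sketched roughly as follows. This is a minimal NumPy illustration under assumed details: the feature map is flattened to one node per spatial region, and the saliency-weighted adjacency construction here is hypothetical; the paper's actual module, adjacency, and layer sizes may differ.

```python
import numpy as np

def gvpg_propagate(features, saliency, weight):
    """One saliency-guided GCN propagation step (illustrative sketch).

    features : (N, C) node features, one node per spatial region
    saliency : (N,) road-saliency score per region, in [0, 1]
    weight   : (C, C_out) learnable projection matrix
    """
    n = features.shape[0]
    # Assumed adjacency: pairs of regions exchange information in
    # proportion to their shared road saliency; self-loops keep each
    # node's own features in the average.
    a = np.minimum.outer(saliency, saliency) + np.eye(n)
    # Symmetric normalization, D^{-1/2} A D^{-1/2}, the standard GCN
    # form whose propagation averages neighbor features.
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Propagate and project: D^{-1/2} A D^{-1/2} H W
    return a_norm @ features @ weight

rng = np.random.default_rng(0)
h = rng.standard_normal((16, 8))   # 16 regions, 8 channels
s = rng.uniform(0.0, 1.0, 16)      # per-region road saliency
w = rng.standard_normal((8, 8))
out = gvpg_propagate(h, s, w)
print(out.shape)  # (16, 8)
```

Because the adjacency is weighted by saliency, high-saliency road regions dominate the averaging step, which is one plausible way a propagation matrix can "guide" where the network attends.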
刘光辉;单哲;杨塬海;王恒;孟月波;徐胜军
School of Information and Control Engineering, Xi'an University of Architecture and Technology, Xi'an 710055, Shaanxi, China; Xi'an Key Laboratory of Intelligent Automation Technology for Building Manufacturing, Xi'an 710055, Shaanxi, China
Computer Science and Automation
Keywords: optical remote sensing images; road extraction; deep neural network; graph convolutional network
Optics and Precision Engineering (《光学精密工程》), 2024, No. 10
pp. 1552-1566 (15 pages)
Supported by the Shaanxi Province Key R&D Program (No. 2021SF-429) and the Shaanxi Province Natural Science Basic Research Program (No. 2023-JC-YB-532)