Multilayer Graph Convolutional Feature Fusion Neural Encoding and Decoding Network for Fine Segmentation of Retinal Vessels
Fundus retinal vessel segmentation can assist doctors in the diagnosis of ophthalmic diseases as well as cardiovascular and cerebrovascular diseases. However, the complex topology and unclear boundaries of the vessels make segmentation difficult. To address these issues, a graph convolutional feature fusion network based on the U-shaped structure is proposed. The network uses a graph convolution module to model the global contextual information between pixels in the encoder features, compensating for the lack of global modeling ability in ordinary convolutions. A multi-scale feature fusion module then fuses the encoder features with the decoder features to reduce the impact of noise in the feature layers on the segmentation results. Finally, a multi-level feature fusion module fuses the outputs of every decoder layer, reducing the loss of spatial information caused by downsampling and reusing deep features. Evaluated on the public datasets DRIVE, CHASEDB1, and STARE, the network achieves F1 and AUC values superior to those of the other two compared models.
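As a reading aid, a minimal PyTorch sketch of the three modules named in the abstract is given below. The class names, layer choices, and the similarity-based adjacency are assumptions made for illustration only; they are not the authors' implementation.

```python
# Minimal sketch of the three modules described in the abstract (assumed design).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelGraphConv(nn.Module):
    """Hypothetical graph-convolution block: each spatial location of an encoder
    feature map is treated as a graph node, and global context is propagated
    through a softmax-normalised feature-similarity adjacency (GCN-style A X W)."""
    def __init__(self, channels):
        super().__init__()
        self.theta = nn.Conv2d(channels, channels, kernel_size=1)  # node embedding
        self.weight = nn.Linear(channels, channels, bias=False)    # GCN weight W
    def forward(self, x):                                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        nodes = self.theta(x).flatten(2).transpose(1, 2)           # (B, HW, C)
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / c**0.5, dim=-1)
        out = F.relu(self.weight(adj @ nodes))                     # propagate + project
        return x + out.transpose(1, 2).reshape(b, c, h, w)         # residual connection

class MultiScaleFusion(nn.Module):
    """Hypothetical encoder-decoder fusion: upsamples the decoder feature to the
    encoder resolution, concatenates both, and mixes channels to suppress noise."""
    def __init__(self, enc_ch, dec_ch, out_ch):
        super().__init__()
        self.mix = nn.Sequential(nn.Conv2d(enc_ch + dec_ch, out_ch, 3, padding=1),
                                 nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
    def forward(self, enc_feat, dec_feat):
        dec_feat = F.interpolate(dec_feat, size=enc_feat.shape[2:],
                                 mode="bilinear", align_corners=False)
        return self.mix(torch.cat([enc_feat, dec_feat], dim=1))

class MultiLevelFusion(nn.Module):
    """Hypothetical multi-level output head: maps every decoder level to a vessel
    map, upsamples to full resolution, and sums them into the final prediction."""
    def __init__(self, chans, num_classes=1):
        super().__init__()
        self.heads = nn.ModuleList(nn.Conv2d(c, num_classes, 1) for c in chans)
    def forward(self, feats, out_size):
        maps = [F.interpolate(h(f), size=out_size, mode="bilinear", align_corners=False)
                for h, f in zip(self.heads, feats)]
        return torch.sigmoid(torch.stack(maps, 0).sum(0))
```

Note that the dense HW x HW adjacency in PixelGraphConv is only practical on downsampled encoder features; the paper's actual graph construction may differ.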
崔少国;张乐迁;文浩
College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
Computer Science and Automation
medical image segmentation; retinal vessels; U-shaped architecture; graph convolution; feature fusion
Journal of University of Electronic Science and Technology of China, 2024 (003)
Pages 404-413 (10 pages)
National Natural Science Foundation of China (62003065); General Project of the Natural Science Foundation of the Chongqing Science and Technology Bureau (CSTB2022NSCQ-MSX1206); Chongqing Technology Foresight and Institutional Innovation Project (CSTB2022TFII-OFX0042); Humanities and Social Sciences Planning Fund of the Ministry of Education (22YJA870005); Key Project of the Chongqing Municipal Education Commission (KJZD-K202200510); Chongqing Social Science Planning Project (2022NDYB119); Humanities and Social Sciences Project of the Chongqing Municipal Education Commission (23SKGH072); Chongqing Normal University Talent Fund (20XLB004)