
U-Net Greenhouse Sweet Cherry Image Segmentation Method Integrating PDE Plant Temporal Image Contrastive Learning and GCN Skip Connections


智慧农业(中英文) (Smart Agriculture), 2025, Vol. 7, Issue 3: 131-142, 12. DOI: 10.12133/j.smartag.SA202502008


胡玲艳¹, 郭睿雅¹, 郭占俊², 徐国辉¹, 盖荣丽¹, 汪祖民¹, 张宇萌¹, 鞠博文¹, 聂晓宇¹

Author Information

  • 1. School of Information Engineering, Dalian University, Dalian 116622, Liaoning, China
  • 2. Dalian Modern Agricultural Production Development Service Center, Dalian 116021, Liaoning, China

Abstract

[Objective] Within the field of plant phenotyping feature extraction, the accurate delineation of small-target boundaries and the adequate recovery of spatial details during upsampling have long been recognized as significant obstacles hindering progress. To address these limitations, an improved U-Net architecture was designed for greenhouse sweet cherry image segmentation.

[Methods] Taking temporal phenotypic images of sweet cherries as the research subject, the U-Net segmentation model was employed to delineate the specific organ regions of the plant. The architecture is referred to as the U-Net integrating the self-supervised contrastive learning method for plant time-series images with priori distance embedding (PDE) pre-training and graph convolutional network (GCN) skip connections for greenhouse sweet cherry image segmentation. To accelerate model convergence, the pre-trained weights derived from the PDE plant temporal image contrastive learning method were transferred to the segmentation network. Concurrently, a GCN local feature fusion layer was incorporated as a skip connection to optimize feature fusion, thereby providing robust technical support for the image segmentation task. Pre-training with the PDE plant temporal image contrastive learning method required the construction of image pairs corresponding to different phenological periods, and a classification distance loss function incorporating prior knowledge was employed to construct an Encoder with adjusted parameters. The pre-trained weights obtained in this way were transferred and applied to the semantic segmentation task, enabling the network to accurately learn the semantic information and detailed textures of the various sweet cherry organs. The Encoder module performs multi-scale feature extraction through convolutional and pooling layers, hierarchically processing the semantic information embedded in the input image and constructing representations that progress from low-level texture features to high-level semantic features. This allows consistent extraction of semantic features from images at various scales and abstraction of the underlying information, enhancing feature discriminability and improving the modeling of complex targets. The Decoder module performs upsampling operations that integrate features from diverse scales and restore the original image resolution, enabling the network to reconstruct spatial details effectively and improving the efficiency of model optimization. At the interface between the Encoder and Decoder modules, a GCN layer designed for local feature fusion is integrated as a skip connection, enabling the network to better capture and learn the local features in multi-scale images.
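To make the skip-connection design described above more concrete, the sketch below shows one way a graph-convolution layer could refine an encoder feature map before it is handed to the decoder. It is a minimal PyTorch illustration, not the authors' implementation: the 4-neighbour grid adjacency, the single row-normalized GCN step, and the residual fusion are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNSkipFusion(nn.Module):
    """Illustrative GCN-based skip connection: every spatial position of the
    encoder feature map is a graph node, edges link 4-neighbouring positions,
    and one graph-convolution step X' = ReLU(D^-1 A X W) refines the features
    before they are passed to the decoder."""

    def __init__(self, channels: int):
        super().__init__()
        self.weight = nn.Linear(channels, channels, bias=False)

    @staticmethod
    def grid_adjacency(h: int, w: int, device) -> torch.Tensor:
        # Row-normalized adjacency of an h x w grid with self-loops.
        idx = torch.arange(h * w, device=device).reshape(h, w)
        right = torch.stack([idx[:, :-1].flatten(), idx[:, 1:].flatten()])
        down = torch.stack([idx[:-1, :].flatten(), idx[1:, :].flatten()])
        e = torch.cat([right, down], dim=1)
        a = torch.zeros(h * w, h * w, device=device)
        a[e[0], e[1]] = 1.0
        a = a + a.T + torch.eye(h * w, device=device)
        return a / a.sum(dim=1, keepdim=True)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = feat.shape
        x = feat.flatten(2).transpose(1, 2)           # (B, H*W, C) node features
        a = self.grid_adjacency(h, w, feat.device)    # (H*W, H*W) adjacency
        x = F.relu(torch.einsum("nm,bmc->bnc", a, self.weight(x)))
        return x.transpose(1, 2).reshape(b, c, h, w) + feat  # residual fusion


# Hypothetical usage inside a U-Net: refine a 64-channel encoder map before
# concatenating it with the corresponding decoder map.
enc_feat = torch.randn(1, 64, 32, 32)
fused = GCNSkipFusion(64)(enc_feat)
```

In a full U-Net such a module would sit on one or more skip paths; the dense adjacency built here is only practical for the smaller feature maps near the bottleneck.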
[Results and Discussions] Using a set of evaluation metrics including accuracy, precision, recall, and F1-Score, an in-depth and rigorous assessment of the model's performance was conducted. The findings revealed that the improved U-Net model achieved superior performance in semantic segmentation of sweet cherry images, with an accuracy of up to 0.9550. Ablation experiments further showed that the proposed method attained a precision of 0.9328, a recall of 0.9274, and an F1-Score of 0.9128. The accuracy of the improved U-Net is higher by 0.0699, 0.0288, and 0.042 compared with the original U-Net, the U-Net with the PDE plant temporal image contrastive learning method, and the U-Net with GCN skip connections, respectively; the F1-Score is higher by 0.0783, 0.0338, and 0.0438, respectively. In comparative experiments against the DeepLabV3, Swin Transformer, and Segment Anything Model segmentation methods, the proposed model surpassed these models by 0.0222, 0.0276, and 0.0422 in accuracy; 0.0637, 0.1471, and 0.1077 in precision; 0.0352, 0.0654, and 0.0508 in recall; and 0.0768, 0.1275, and 0.1034 in F1-Score.

[Conclusions] The PDE plant temporal image contrastive learning method and GCN techniques were combined to develop an improved U-Net architecture specifically designed and optimized for the analysis of sweet cherry plant phenotypes. The results demonstrate that the proposed method effectively addresses the boundary blurring and detail loss associated with small targets in complex orchard scenarios. It enables precise segmentation of the primary organs and background regions in sweet cherry images, thereby enhancing the segmentation accuracy of the original model. This improvement provides a solid foundation for subsequent crop modeling research and holds significant practical importance for the advancement of agricultural intelligence.
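The scores quoted above are standard pixel-level metrics. The short sketch below shows how they can be computed from a multi-class confusion matrix; the macro averaging over classes is an assumption for illustration, since the abstract does not state the averaging scheme used.

```python
import numpy as np


def segmentation_scores(pred: np.ndarray, target: np.ndarray, num_classes: int):
    """Pixel-level accuracy, precision, recall and F1-Score for integer label
    maps `pred` and `target` of identical shape (macro-averaged over classes)."""
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    cm = np.bincount(
        num_classes * target.ravel() + pred.ravel(),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # TP / (TP + FP)
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # TP / (TP + FN)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision.mean(), recall.mean(), f1.mean()


# Hypothetical example: background plus three sweet cherry organ classes.
pred = np.random.randint(0, 4, size=(256, 256))
target = np.random.randint(0, 4, size=(256, 256))
print(segmentation_scores(pred, target, num_classes=4))
```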


Key words

priori distance embedding; transfer learning; GCN; U-Net; skip connection; plant phenotype

Classification

Information Technology and Security Science

Cite This Article

胡玲艳, 郭睿雅, 郭占俊, 徐国辉, 盖荣丽, 汪祖民, 张宇萌, 鞠博文, 聂晓宇. U-Net Greenhouse Sweet Cherry Image Segmentation Method Integrating PDE Plant Temporal Image Contrastive Learning and GCN Skip Connections[J]. 智慧农业(中英文) (Smart Agriculture), 2025, 7(3): 131-142, 12.

Funding

Key Project of the Liaoning Provincial Science and Technology Plan (2022020655-JH1/109)

Dalian Science and Technology Innovation Fund Project (2022JJ12SN052)

智慧农业(中英文) (Smart Agriculture), ISSN 2096-8094
