果园作业机器人自主导航多任务联合感知方法研究

Multi-task joint perception framework for autonomous navigation in orchard robotics

张津国 1, 蔡建峰 1, 姜蓉蓉 2, 余山山 3, 王蓬勃 1

智能化农业装备学报(中英文), 2025, 6(2): 35-43, 9. DOI: 10.12398/j.issn.2096-7217.2025.02.003

Author information

  • 1. School of Mechanical and Electrical Engineering, Soochow University, Suzhou 215137, Jiangsu, China; Jiangsu Provincial Key Laboratory of Embodied Intelligent Robot Technology, Suzhou 215137, Jiangsu, China
  • 2. Suzhou Caoyang Ecological Agriculture Development Co., Ltd., Suzhou 215143, Jiangsu, China
  • 3. Nanjing Institute of Agricultural Mechanization, Ministry of Agriculture and Rural Affairs, Nanjing 210014, Jiangsu, China

Abstract

Orchard operational scenarios present significant challenges for visual perception, including high vegetation heterogeneity, dynamic lighting variations, and diverse target morphologies. Traditional single-task visual perception models suffer from low feature reusability and high computational redundancy, and therefore fail to meet the real-time environmental perception demands of agricultural robots. This study proposes AgriYOLOP, a lightweight multi-task collaborative perception framework specifically designed for orchard environments. Through a systematic reconstruction of the YOLOP architecture, AgriYOLOP incorporates an efficient backbone network, enhanced anchor-free detection techniques, feature pyramid networks (FPN), path aggregation networks (PAN), and task-adaptive loss function weighting strategies. The framework facilitates parallel collaborative processing of three critical perception tasks: trunk detection, obstacle recognition, and traversable region segmentation. The proposed framework was validated on a self-constructed orchard dataset comprising 4 765 images (1 280 pixels × 720 pixels) captured across diverse seasons, lighting conditions, and vegetation growth stages. Experimental results demonstrate that AgriYOLOP achieves 92.7% precision, 94.6% recall, and 96.7% mAP50 in object detection tasks, along with 98.3% recall, an F1 score of 99.1, and 98.1% mIoU in traversable region segmentation. Deployed on an NVIDIA RTX 4060 platform, the model attains a real-time inference speed of 69 f/s with only 14 M parameters. Comparative experiments reveal that the multi-task collaborative architecture significantly enhances feature-sharing efficiency, reducing inference latency by 32.6% compared with single-task models while improving robustness to illumination and seasonal variations. This approach effectively mitigates the conventional trade-off between target detection accuracy and semantic segmentation efficiency encountered in real-time agricultural robotic applications. The study provides a high-precision, low-latency real-time perception solution for autonomous orchard robot navigation.
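
For readers who want a concrete picture of the multi-task layout the abstract describes, the PyTorch sketch below shows a shared backbone-and-neck feeding parallel task heads (a two-class anchor-free detection head standing in for trunk and obstacle detection, plus a traversable-region segmentation head), with the per-task losses combined through learnable weights. This is a minimal illustration under stated assumptions, not the published AgriYOLOP implementation: all layer sizes, the class layout, and the uncertainty-style loss weighting are assumptions made here for exposition.

import torch
import torch.nn as nn


class SharedBackboneNeck(nn.Module):
    """Stand-in for the lightweight backbone + FPN/PAN neck: it produces one
    shared feature map that every task head reuses."""

    def __init__(self, out_channels: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, out_channels, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.features(x)  # stride-8 shared features


class AgriYOLOPSketch(nn.Module):
    """Illustrative multi-task layout: shared neck -> parallel task heads."""

    def __init__(self, num_det_classes: int = 2):  # 2 classes assumed: trunk, obstacle
        super().__init__()
        self.neck = SharedBackboneNeck(128)
        # Anchor-free detection head: per-location class scores + 4 box offsets.
        self.det_head = nn.Conv2d(128, num_det_classes + 4, kernel_size=1)
        # Segmentation head: traversable vs. non-traversable, upsampled to input size.
        self.seg_head = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, kernel_size=1),
            nn.Upsample(scale_factor=8, mode="bilinear", align_corners=False),
        )
        # Task-adaptive weighting: one learnable log-variance per task
        # (homoscedastic-uncertainty weighting, an assumption here).
        self.log_vars = nn.Parameter(torch.zeros(2))

    def forward(self, x):
        shared = self.neck(x)  # features computed once, reused by both heads
        return self.det_head(shared), self.seg_head(shared)

    def combined_loss(self, det_loss, seg_loss):
        # L_total = sum_i( exp(-s_i) * L_i + s_i ), with s_i learned per task.
        losses = torch.stack([det_loss, seg_loss])
        return (torch.exp(-self.log_vars) * losses + self.log_vars).sum()


if __name__ == "__main__":
    model = AgriYOLOPSketch()
    frame = torch.randn(1, 3, 720, 1280)  # one 1 280 x 720 camera frame
    det_out, seg_out = model(frame)
    print(det_out.shape)  # torch.Size([1, 6, 90, 160])
    print(seg_out.shape)  # torch.Size([1, 2, 720, 1280])

The design point that the reported latency reduction rests on is visible in forward(): the shared features are computed once per frame and reused by every head, instead of running separate single-task networks over the same image.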

Key words

multi-task learning/orchard environment perception/target detection/semantic segmentation/agricultural robot

Classification

Agricultural science and technology

Cite this article

张津国, 蔡建峰, 姜蓉蓉, 余山山, 王蓬勃. 果园作业机器人自主导航多任务联合感知方法研究[J]. 智能化农业装备学报(中英文), 2025, 6(2): 35-43, 9.

Funding

National Key Research and Development Program of China (2022YFB4702202)

Integrated Pilot Project for Agricultural Machinery Research, Development, Manufacturing, Promotion and Application of Jiangsu Provincial Department of Agriculture and Rural Affairs (JSYTH07)

智能化农业装备学报(中英文)  ISSN 2096-7217
