Transactions of the Chinese Society for Agricultural Machinery, 2024, Vol. 55, Issue (11): 160-170, 11. DOI: 10.6041/j.issn.1000-1298.2024.11.018
Multi-task Visual Perception Method in Dragon Orchards Based on OrchardYOLOP
Abstract
Faced with challenges such as complex terrain, fluctuating lighting, and unstructured environments, modern orchard robots must process a vast amount of environmental information efficiently. Traditional pipelines that execute multiple single tasks sequentially are limited by computational power and cannot meet these demands. To address the real-time performance and accuracy requirements of multi-task autonomous driving robots in dragon fruit orchard environments, OrchardYOLOP was developed on the basis of YOLOP: a focus attention convolution module was introduced, C2F and SPPF modules were employed, and the loss function for the segmentation tasks was optimized. Experiments demonstrated that OrchardYOLOP achieved a precision of 84.1% on the object detection task, an mIoU of 89.7% on the drivable-area segmentation task, and an mIoU increased to 90.8% on the fruit-tree region segmentation task, with an inference speed of 33.33 frames per second and a parameter count of only 9.67 × 10⁶. Compared with the YOLOP algorithm, it not only met the real-time speed requirement but also significantly improved accuracy, addressing key issues in multi-task visual perception in dragon fruit orchards and providing an effective solution for multi-task autonomous driving visual perception in unstructured environments.
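The mIoU figures quoted for the two segmentation tasks are per-class intersection-over-union scores averaged over classes. A minimal pure-Python sketch of this standard metric (the toy masks and two-class layout below are illustrative, not data from the paper):

```python
# Hedged sketch: mean intersection-over-union (mIoU) for segmentation masks.
# Masks are nested lists of integer class labels; classes absent from both
# prediction and ground truth are skipped so they do not distort the mean.

def miou(pred, target, num_classes):
    """Average IoU over classes present in the prediction or the target."""
    flat_pred = [p for row in pred for p in row]
    flat_target = [t for row in target for t in row]
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(flat_pred, flat_target) if p == c and t == c)
        union = sum(1 for p, t in zip(flat_pred, flat_target) if p == c or t == c)
        if union:  # skip classes missing from both masks
            ious.append(inter / union)
    return sum(ious) / len(ious)

# Toy 2x4 masks: class 0 = background, class 1 = drivable area (illustrative)
pred   = [[0, 1, 1, 1],
          [0, 0, 1, 1]]
target = [[0, 1, 1, 0],
          [0, 0, 1, 1]]
print(miou(pred, target, 2))  # class 0: 3/4, class 1: 4/5 -> mean 0.775
```

The same per-class averaging is applied per image or over an accumulated confusion matrix across a test set; the paper's 89.7% and 90.8% figures correspond to the latter style of dataset-level evaluation.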
Key words: dragon orchard / multi-task / visual perception / semantic segmentation / object detection / YOLOP
Category: Information Technology and Safety Science
Citation: 赵文锋, 黄袁爵, 钟敏悦, 李振源, 罗梓涛, 黄家俊. Multi-task Visual Perception Method in Dragon Orchards Based on OrchardYOLOP [J]. Transactions of the Chinese Society for Agricultural Machinery, 2024, 55(11): 160-170, 11.
Funding: National Key Research and Development Program of China (2023YFD1400700)