
Visual Perception Method for Cotton-picking Robots Based on Fusion of Multi-view 3D Point Clouds (OA | Peking University Core | CSTPCD)

Chinese Abstract (translated)

To address the limited visual perception of traditional cotton-picking robots caused by a single viewpoint and two-dimensional image information, this paper proposes a multi-view 3D point cloud registration method to enhance the robots' real-time 3D visual perception. Four Realsense D435 depth cameras with fixed poses were used to acquire cotton point cloud data from different viewpoints. The AprilTags algorithm was used to calibrate the relative pose between each depth camera's RGB imaging module and the Tag marker, and, based on the known transformation between the coordinate systems of the RGB imaging module and the stereo imaging module within each camera, the corresponding transformations of point cloud coordinates between cameras were solved, thereby achieving fusion registration of the point clouds. The results show that the proposed registration method achieves an average global registration distance error of 0.93 cm and an average registration time of 0.025 s, demonstrating high registration accuracy and efficiency. To meet the real-time perception requirements of cotton-picking robots, the efficiency of the point cloud acquisition, background filtering, and fusion registration steps was analyzed and optimized, and the overall algorithm reaches 29.85 f/s, satisfying the real-time requirements of the robot's perception system.
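The calibration chain described above (the Tag pose measured by each RGB module, composed with the camera's own RGB-to-stereo extrinsic) reduces to a product of 4×4 homogeneous transforms. The following is a minimal numpy sketch of that composition, not the paper's implementation: the matrix names T_rgb_tag and T_rgb_depth, and the choice of the tag frame as the common world frame, are assumptions made here for illustration.

```python
import numpy as np

def depth_to_tag_transform(T_rgb_tag: np.ndarray, T_rgb_depth: np.ndarray) -> np.ndarray:
    """Compose the 4x4 transform taking points from a camera's depth (stereo)
    frame into the shared AprilTag frame.

    T_rgb_tag   : tag pose expressed in the RGB imaging module's frame (assumed given)
    T_rgb_depth : depth (stereo) frame expressed in the RGB frame
                  (the factory extrinsic reported by the camera, assumed given)
    """
    # p_rgb = T_rgb_depth @ p_depth  and  p_rgb = T_rgb_tag @ p_tag,
    # hence p_tag = inv(T_rgb_tag) @ T_rgb_depth @ p_depth.
    return np.linalg.inv(T_rgb_tag) @ T_rgb_depth

def pairwise_transform(T_depth_i_to_tag: np.ndarray, T_depth_j_to_tag: np.ndarray) -> np.ndarray:
    """Transform taking camera i's depth-frame points into camera j's depth frame,
    obtained by passing through the common tag frame."""
    return np.linalg.inv(T_depth_j_to_tag) @ T_depth_i_to_tag

if __name__ == "__main__":
    # Toy example: identity RGB-to-depth extrinsics, tags seen at different offsets.
    T_rgb_depth = np.eye(4)
    T_rgb_tag_1 = np.eye(4); T_rgb_tag_1[2, 3] = 0.5   # tag 0.5 m in front of camera 1
    T_rgb_tag_2 = np.eye(4); T_rgb_tag_2[0, 3] = 0.2   # tag 0.2 m to the side of camera 2
    T1 = depth_to_tag_transform(T_rgb_tag_1, T_rgb_depth)
    T2 = depth_to_tag_transform(T_rgb_tag_2, T_rgb_depth)
    print(pairwise_transform(T1, T2))
```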

English Abstract

Traditional cotton-picking robots face visual perception challenges due to their reliance on a single viewpoint and two-dimensional imagery. To address this, a multi-view 3D point cloud registration method was introduced, enhancing these robots' real-time 3D visual perception. Four fixed-pose Realsense D435 depth cameras were used to capture point cloud data of the cotton from multiple viewpoints. To ensure the quality of fusion registration, each camera underwent rigorous imaging-distortion calibration and depth-error adjustment before operation. With the help of the AprilTags algorithm, the relative pose between each camera's RGB imaging module and its AprilTag marker was calibrated; combined with the known transformation between the coordinate systems of the RGB and stereo imaging modules, the point cloud coordinate transformations between cameras could be deduced, ensuring accurate fusion and alignment. The results showed that this method achieved an average global alignment error of 0.93 cm and an average registration time of 0.025 s, highlighting its accuracy and efficiency compared with commonly used methods. To meet the real-time demands of cotton-picking robots, the point cloud acquisition, background filtering, and fusion registration steps were also optimized; the overall algorithm runs at 29.85 f/s, meeting the real-time demands of the robot's perception system.
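To make the fusion-registration step concrete, here is a minimal Open3D sketch that crops each camera's cloud to a workspace (a stand-in for the paper's background filtering), maps it into the common frame with a precomputed transform, and merges the result. The function name fuse_clouds, the workspace bounds, and the 5 mm voxel size are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
import open3d as o3d

def fuse_clouds(clouds, transforms, workspace_min=(-0.5, -0.5, 0.1),
                workspace_max=(0.5, 0.5, 1.0), voxel=0.005):
    """Crop each cloud to the workspace, move it into the common frame,
    and merge everything into one downsampled cloud.

    clouds     : list of o3d.geometry.PointCloud, one per camera (in its depth frame)
    transforms : list of 4x4 numpy arrays mapping each depth frame to the common frame
    """
    fused = o3d.geometry.PointCloud()
    box = o3d.geometry.AxisAlignedBoundingBox(np.array(workspace_min, dtype=float),
                                              np.array(workspace_max, dtype=float))
    for cloud, T in zip(clouds, transforms):
        c = cloud.crop(box)    # coarse background filtering in the camera frame
        c.transform(T)         # into the shared (tag) frame
        fused += c
    return fused.voxel_down_sample(voxel)   # thin out overlapping regions

if __name__ == "__main__":
    # Toy data: two random clouds and identity transforms.
    rng = np.random.default_rng(0)
    clouds = []
    for _ in range(2):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(
            rng.uniform(-0.4, 0.4, size=(1000, 3)) + [0.0, 0.0, 0.5])
        clouds.append(pcd)
    print(fuse_clouds(clouds, [np.eye(4), np.eye(4)]))
```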

Liu Kun (刘坤); Wang Xiao (王晓); Zhu Yifan (朱一帆)

School of Automation, Nanjing Institute of Technology, Nanjing 211167, China

Computer and Automation

cotton-picking robots; visual perception; 3D point cloud; fusion; AprilTags algorithm

Transactions of the Chinese Society for Agricultural Machinery (《农业机械学报》), 2024(004)

74-81 (8 pages)

Supported by the Basic Science (Natural Science) Research Project of Jiangsu Higher Education Institutions (23KJA460008) and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (SJCX23_1180)

10.6041/j.issn.1000-1298.2024.04.007
