
OA | Peking University Core Journal (北大核心) | CSTPCD

Autonomous pose estimation of underground disaster rescue drones based on visual and laser fusion

Abstract


The autonomous navigation capability of drones in post-disaster mines is a prerequisite for their performing rescue and disaster-relief tasks, and autonomous pose estimation in unknown three-dimensional space is one of the key technologies for autonomous drone navigation. Existing vision-based pose estimation algorithms suffer from scale ambiguity and poor positioning performance, because a monocular camera cannot directly measure depth in three-dimensional space and is susceptible to the dim lighting underground. Laser-based pose estimation algorithms, in turn, are prone to errors owing to the LiDAR's small field of view, uneven scanning pattern, and the limited structural features of mine scenes. To address these problems, an autonomous pose estimation algorithm for underground post-disaster rescue drones based on visual and laser fusion is proposed. First, the monocular camera and LiDAR carried by the underground drone acquire image data and laser point-cloud data of the mine. ORB feature points are uniformly extracted from each frame of mine image data, their depth is recovered from the laser point cloud, and vision-based drone pose estimation is achieved through inter-frame matching of the feature points. Second, feature corner points and feature plane points are extracted from each frame of the underground laser point cloud, and laser-based drone pose estimation is achieved through inter-frame matching of these features. Third, the visual matching error function and the laser matching error function are placed under a single pose optimization function, so that the pose of the underground drone is estimated by fusing vision and laser. Finally, historical frame data are introduced through a visual sliding window and a laser local map, an error function between the historical frames and the latest estimated pose is constructed, and nonlinear optimization of this error function refines and corrects the drone pose under local constraints, preventing accumulated estimation errors from causing trajectory drift. Simulation experiments emulating the complex post-disaster mine environment show that the average relative translation error and relative rotation error of the fusion-based pose estimation algorithm are 0.001 1 m and 0.000 8°, respectively; the average processing time per frame is below 100 ms; and the algorithm exhibits no trajectory drift during long-term underground operation. Compared with pose estimation based solely on vision or laser, the fusion algorithm improves accuracy and stability while meeting real-time requirements.
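The depth-recovery step described in the abstract (assigning LiDAR depth to monocular ORB feature points) can be sketched as follows. This is a minimal illustration only, assuming a pinhole camera model with hypothetical intrinsics `K`, a hypothetical LiDAR-to-camera extrinsic `T_cl`, and simple nearest-neighbor association in the image plane; the paper's actual implementation details are not given in the abstract.

```python
import numpy as np

def project_lidar_to_image(points_l, K, T_cl):
    """Project LiDAR-frame 3-D points (N, 3) into the image plane.

    K    : 3x3 pinhole camera intrinsics (hypothetical values in the example)
    T_cl : 4x4 LiDAR-to-camera extrinsic transform (assumed known from calibration)
    Returns pixel coordinates (M, 2) and camera-frame depths (M,)
    for the points lying in front of the camera.
    """
    pts_h = np.hstack([points_l, np.ones((len(points_l), 1))])  # homogeneous coords
    pts_c = (T_cl @ pts_h.T).T[:, :3]                           # into camera frame
    front = pts_c[:, 2] > 0.1                                   # keep points ahead of camera
    pts_c = pts_c[front]
    uv = (K @ pts_c.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                 # perspective division
    return uv, pts_c[:, 2]

def recover_feature_depth(kp_uv, lidar_uv, lidar_depth, max_px=3.0):
    """Assign each ORB keypoint the depth of its nearest projected LiDAR point
    within max_px pixels; NaN where no LiDAR point supports the feature."""
    depths = np.full(len(kp_uv), np.nan)
    for i, kp in enumerate(kp_uv):
        d2 = np.sum((lidar_uv - kp) ** 2, axis=1)
        j = np.argmin(d2)
        if d2[j] <= max_px ** 2:
            depths[i] = lidar_depth[j]
    return depths
```

With depth attached to the ORB features, the inter-frame matches become 3-D-to-2-D correspondences, which is what lets the visual reprojection error share one pose optimization function with the laser corner/plane residuals.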

何怡静;杨维

School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China

Mining Engineering


underground drones; pose estimation; monocular camera; LiDAR; visual and laser fusion; ORB feature points

Journal of Mine Automation (《工矿自动化》), 2024, Issue 4

Pages 94-102 (9 pages)

Supported by the National Natural Science Foundation of China (51874299).

DOI: 10.13272/j.issn.1671-251x.2023080124
