National Science and Technology Journal Platform (国家科技期刊平台)

A Visual SLAM Algorithm in Indoor Weak Texture Environment (OA | PKU Core | CSTPCD)

Abstract

A visual-inertial SLAM (simultaneous localization and mapping) optimization algorithm is proposed to address the poor robustness and accuracy of visual SLAM for indoor robots in weak-texture environments. First, a deep learning module based on an attention mechanism directly matches features between two adjacent image frames, and is combined with traditional feature detection to overcome the failure to extract enough feature points. Second, a camera depth-confidence model is established: before the inter-frame pose transformation is computed, a depth-confidence probability is assigned to each spatial point to reduce the drift error of long-range features. Finally, the depth confidence of all matched feature points involved in back-end optimization serves as the piecewise threshold of a dynamic robust kernel function, refining the traditional bundle adjustment and coordinating the overall motion trajectory. Real-scene experiments show that the algorithm is markedly robust in weak-texture environments: compared with the VINS-RGBD algorithm, its absolute trajectory error is reduced by 50.38% and its relative trajectory error by 85.75%.
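The abstract only sketches the idea of using per-point depth confidence as a dynamic threshold for the robust kernel; the paper's exact formulation is not given here. As an illustration only, the following hypothetical sketch assumes a Huber-style weight whose threshold shrinks with depth confidence, so that far (low-confidence) points with large residuals are down-weighted more aggressively in bundle adjustment; the function names, the linear confidence decay, and the 6 m cut-off are all invented for this example.

```python
def depth_confidence(z, z_max=6.0):
    """Hypothetical depth-confidence model: confidence decays linearly
    with range and vanishes beyond z_max (assumed sensor limit)."""
    return max(0.0, min(1.0, 1.0 - z / z_max))

def dynamic_huber_weight(residual, conf, base_delta=1.0):
    """Huber-style robust weight with a confidence-scaled piecewise
    threshold: within the threshold the residual keeps full weight,
    beyond it the weight falls off as delta / |r|."""
    delta = base_delta * max(conf, 1e-3)  # per-point piecewise threshold
    r = abs(residual)
    return 1.0 if r <= delta else delta / r

# For the same 2-pixel residual, a near point keeps more weight
# than a far point, limiting the drift contribution of far features.
w_near = dynamic_huber_weight(2.0, depth_confidence(1.0))
w_far = dynamic_huber_weight(2.0, depth_confidence(5.5))
```

In a real back end these weights would multiply each reprojection residual inside the bundle-adjustment cost, which is the role the abstract assigns to the dynamic robust kernel.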

蔡显奇 (Cai Xianqi); 王晓松 (Wang Xiaosong); 李玮 (Li Wei)

College of Mechanical and Electrical Engineering, China Jiliang University, Hangzhou 310018, Zhejiang, China

Keywords: visual SLAM (simultaneous localization and mapping); weak texture environment; RGB-D camera; feature confidence; robust kernel function

《机器人》 (Robot), 2024, Issue 3

Pages: 284-293, 304 (11 pages)

DOI: 10.13973/j.cnki.robot.230253
