Study on joint calibration method based on monocular camera and multi-line LiDAR
(OA; Peking University Core Journal; CSTPCD indexed)
Objectives To address the extrinsic calibration problem between a camera and a LiDAR, reduce the calibration error between the two sensors, and achieve higher calibration accuracy, a joint calibration method based on nonlinear optimization was proposed. Methods First, images of a checkerboard calibration board were captured from different angles; after sufficient data had been collected, a toolkit was used to calibrate the camera, yielding the intrinsic parameters of the monocular camera. Then, the corner-feature coordinates of the calibration board were detected in both the laser point cloud and the image. The corner coordinates in the laser point cloud were obtained from the extracted calibration-board point cloud and its geometric features: the vertex coordinates of the fitted pattern were determined from top to bottom and left to right, and, combined with the number of rows and columns of the board, the coordinates of each corner were recovered. The image corner features were detected with the FAST corner detector, and their coordinates were determined from the gray-level information of the corners. An objective function was then constructed from the reprojection error of the point-cloud feature points onto the image, converting the extrinsic-parameter solution into a least-squares problem. Finally, the optimal extrinsic parameters were obtained by iterative solution with the Levenberg-Marquardt nonlinear optimization algorithm. Results The final average reprojection error was 1.29 pixels, with a maximum error of 2.46 pixels, a minimum error of 0.70 pixels, and a standard deviation of 0.57 pixels. Conclusions Projecting the point cloud onto the image with the calibrated extrinsics showed good calibration quality. The result was applied to a visual-LiDAR fusion SLAM algorithm in a real scene, yielding a smooth motion trajectory highly consistent with the map. The calibration procedure is simple, does not require the true physical size of the checkerboard, and meets practical requirements.
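The final step of the method above, minimizing checkerboard-corner reprojection error over the six extrinsic parameters with Levenberg-Marquardt, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the intrinsic matrix, the synthetic board points, and the ground-truth extrinsics are all assumed values, and SciPy's `least_squares` with `method="lm"` stands in for the paper's Levenberg-Marquardt solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed pinhole intrinsics (fx, fy, cx, cy) -- illustrative, not from the paper.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(points_lidar, rvec, tvec):
    """Project 3-D points from the LiDAR frame into the image using extrinsics (rvec, tvec)."""
    R = Rotation.from_rotvec(rvec).as_matrix()
    p_cam = points_lidar @ R.T + tvec      # LiDAR frame -> camera frame
    p_img = p_cam @ K.T                    # apply intrinsics
    return p_img[:, :2] / p_img[:, 2:3]    # perspective division -> pixel coordinates

def residuals(params, points_lidar, corners_img):
    """Flattened reprojection error of point-cloud corners vs. detected image corners."""
    return (project(points_lidar, params[:3], params[3:]) - corners_img).ravel()

# Synthetic checkerboard corners in the LiDAR frame: a 6x5 planar grid ~3 m ahead.
gx, gy = np.meshgrid(np.arange(6) * 0.1, np.arange(5) * 0.1)
pts = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 3.0)])

# Ground-truth extrinsics, used here only to generate synthetic observations.
rv_true = np.array([0.02, -0.01, 0.03])
t_true = np.array([0.05, -0.10, 0.20])
obs = project(pts, rv_true, t_true)

# Levenberg-Marquardt refinement from a rough initial guess (identity pose).
sol = least_squares(residuals, x0=np.zeros(6), args=(pts, obs), method="lm")
err = np.abs(residuals(sol.x, pts, obs)).mean()
print(f"mean reprojection error: {err:.2e} px")
```

With noise-free synthetic observations the solver recovers the ground-truth pose and the mean reprojection error is effectively zero; with real FAST corner detections and point-cloud corner estimates, residual errors on the order of a pixel, as reported in the Results, are expected.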
DAI Jun; LI Wenbo; ZHAO Junwei; YUAN Xingqi; WANG Yuegong; LI Dongfang; CHENG Xiaoqi; HANAJIMA Naohiko
1. Henan International Joint Laboratory of Precision Forming of Advanced Electronic Packaging Materials, Henan Polytechnic University, Jiaozuo 454000, Henan, China
2. School of Mechanical and Power Engineering, Henan Polytechnic University, Jiaozuo 454000, Henan, China
3. Pingdingshan Pingmei Machinery Coal Mine Machinery Equipment Co., Ltd., Pingdingshan 467000, Henan, China
4. School of Mechatronic Engineering and Automation, Foshan University, Foshan 528225, Guangdong, China
5. Institute of Robotics and Mechanical Engineering, Muroran Institute of Technology, Muroran 050-0071, Japan
Subject: Computer and Automation
Keywords: multi-sensor fusion; monocular camera; LiDAR; joint calibration; image processing
Journal of Henan Polytechnic University (Natural Science), 2024, No. 2
Pages 137-146 (10 pages)
Funding: National Natural Science Foundation of China (62201151); Henan Province Science and Technology Research Project (232102221028); Key Scientific Research Project of Higher Education Institutions of Henan Province (22A460020); Doctoral Fund of Henan Polytechnic University (B2016-22)