自动化学报 2017, Vol. 43, Issue (7): 1248-1256, 9. DOI: 10.16383/j.aas.2017.e150274
A Visual-attention-based 3D Mapping Method for Mobile Robots
Abstract
Human visual attention is highly selective. An artificial vision system that imitates this mechanism increases the efficiency, intelligence, and robustness of mobile robots in environment modeling. This paper presents a 3-D modeling method based on visual attention for mobile robots. The method uses the distance-potential gradient as motion contrast and combines the visual features extracted from the scene with a mean shift segmentation algorithm to detect conspicuous objects in the surrounding environment. It takes the saliency of objects as prior information, uses Bayes' theorem to fuse the sensor model with the prior grid model, and uses a projection method to create and update the 3-D environment model. The results of the experiments and performance evaluation illustrate the capability of our approach to generate accurate 3-D maps.
Key words
3-D mapping / grid model / mobile robots / visual attention
Cite this article
Binghua Guo, Hongyue Dai, Zhonghua Li. A Visual-attention-based 3D Mapping Method for Mobile Robots[J]. 自动化学报, 2017, 43(7): 1248-1256, 9.
Funding
This work was supported in part by the Foundation of Guangdong Educational Committee (2014KTSCX191) and the National Natural Science Foundation of China (61201087).
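The fusion step named in the abstract, combining a sensor model with a prior grid model via Bayes' theorem, is commonly implemented as a log-odds occupancy grid update. The sketch below illustrates that general idea only and is not the paper's implementation: the function name bayes_grid_update, the inverse sensor probabilities p_hit and p_miss, and the uniform prior are assumptions; in the paper the prior would instead be derived from the visual saliency of detected objects.

```python
import numpy as np

def log_odds(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

def bayes_grid_update(grid_logodds, hit_mask, p_hit=0.7, p_miss=0.4, prior=0.5):
    """Fuse one range observation into an occupancy grid with Bayes' rule.

    Working in log-odds turns the Bayesian product of the inverse sensor
    model and the prior grid model into a per-cell addition.
    All probabilities here are illustrative placeholders.
    """
    inverse_model = np.where(hit_mask, log_odds(p_hit), log_odds(p_miss))
    return grid_logodds + inverse_model - log_odds(prior)

# Example: a 4x4 grid initialised at the prior, one cell observed as occupied.
grid = np.full((4, 4), log_odds(0.5))
hits = np.zeros((4, 4), dtype=bool)
hits[1, 2] = True
grid = bayes_grid_update(grid, hits)
prob = 1.0 - 1.0 / (1.0 + np.exp(grid))  # convert log-odds back to probability
print(prob[1, 2], prob[0, 0])  # the observed cell rises above 0.5, the others fall below
```

The log-odds form is a standard choice because repeated observations accumulate by addition, avoiding renormalization of probabilities at every update.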