
Slip Detection for Robot Grasping Based on Attention Mechanism and Visuo-Tactile Fusion (OA; PKU Core; CSTPCD)

Abstract

Slip detection is of great significance for robot grasping tasks. During grasping, vision and touch are the key modalities for judging the grasp state, yet fusing them efficiently remains challenging. Based on the idea of visuo-tactile information fusion, a novel visuo-tactile fusion model is proposed to solve the slip-detection problem in robot grasping. First, the model extracts spatial and temporal features from the visual and tactile data using a convolutional neural network and a multi-scale temporal convolutional network. Then, an attention mechanism assigns weights to the visuo-tactile features, and multimodal fusion is performed through another multi-scale temporal convolutional network. Finally, a fully connected layer outputs the detection result for the grasp state. Data were collected with a 7-DOF XArm robotic arm, a D455 RGB camera, and a XELA tactile sensor. Experimental results show that the slip-detection accuracy of the proposed model reaches 98.98%, demonstrating its research and application value for the reliable and smooth execution of robot grasping tasks.
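The pipeline described in the abstract (per-modality CNN and multi-scale TCN feature extraction, attention-weighted modality fusion, a second multi-scale TCN, and a fully connected classifier) can be sketched in PyTorch as below. This is a minimal illustration, not the authors' implementation: the feature dimension (128), the TCN kernel sizes (3/5/7), the 48-value tactile reading per time step, and the binary slip/stable output are all assumptions.

```python
import torch
import torch.nn as nn

class MultiScaleTCN(nn.Module):
    """Multi-scale temporal convolution: parallel 1D convolutions with
    different kernel sizes over the time axis, concatenated and projected."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):  # assumed sizes
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes
        )
        self.proj = nn.Conv1d(out_ch * len(kernel_sizes), out_ch, 1)

    def forward(self, x):  # x: (B, C, T)
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.proj(y))

class VisuoTactileFusion(nn.Module):
    def __init__(self, d=128, n_classes=2):  # d and n_classes are assumed
        super().__init__()
        # Per-frame spatial features from the RGB stream (small CNN backbone).
        self.vis_cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d))
        # Tactile stream: flattened taxel grid per time step (48 is assumed).
        self.tac_mlp = nn.Linear(48, d)
        # Temporal features per modality.
        self.vis_tcn = MultiScaleTCN(d, d)
        self.tac_tcn = MultiScaleTCN(d, d)
        # Attention weights over the two modalities, per time step.
        self.attn = nn.Linear(2 * d, 2)
        # Fusion over time, then classification.
        self.fuse_tcn = MultiScaleTCN(2 * d, d)
        self.fc = nn.Linear(d, n_classes)

    def forward(self, vis, tac):
        # vis: (B, T, 3, H, W); tac: (B, T, 48)
        B, T = vis.shape[:2]
        v = self.vis_cnn(vis.flatten(0, 1)).view(B, T, -1)       # (B, T, d)
        t = torch.relu(self.tac_mlp(tac))                        # (B, T, d)
        v = self.vis_tcn(v.transpose(1, 2)).transpose(1, 2)      # (B, T, d)
        t = self.tac_tcn(t.transpose(1, 2)).transpose(1, 2)      # (B, T, d)
        w = torch.softmax(self.attn(torch.cat([v, t], -1)), -1)  # (B, T, 2)
        fused = torch.cat([v * w[..., :1], t * w[..., 1:]], -1)  # (B, T, 2d)
        h = self.fuse_tcn(fused.transpose(1, 2)).mean(dim=2)     # (B, d)
        return self.fc(h)  # slip / stable logits

model = VisuoTactileFusion()
logits = model(torch.randn(2, 16, 3, 64, 64), torch.randn(2, 16, 48))
print(logits.shape)  # torch.Size([2, 2])
```

Computing the attention weights per time step lets the classifier lean on the tactile stream when the camera view is uninformative and on vision otherwise, which is the usual motivation for learned modality weighting in visuo-tactile fusion.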

黄兆基; 高军礼; 唐兆年; 宋海涛; 郭靖

School of Automation, Guangdong University of Technology, Guangzhou 510006, China; School of Business Administration, South China University of Technology, Guangzhou 510640, China

Computer Science and Automation

robot grasping; visuo-tactile fusion; multi-scale temporal convolutional network; attention mechanism

《信息与控制》 (Information and Control), 2024, No. 2

Pages 191-198 (8 pages)

National Natural Science Foundation of China (61803103); China Scholarship Council (201908440537)

DOI: 10.13976/j.cnki.xk.2023.2598
