Acta Automatica Sinica, 2026, Vol. 52, Issue (1): 18-51, 34. DOI: 10.16383/j.aas.c250394
Survey of Vision-Language-Action Models for Embodied Manipulation
Abstract
Embodied intelligence systems, which enhance agent capabilities through continuous environment interactions, have garnered significant attention from both academia and industry. Vision-language-action (VLA) models, inspired by advancements in large foundation models, serve as universal robotic control frameworks that substantially improve agent-environment interaction capabilities in embodied intelligence systems, broadening the application scenarios of embodied intelligence robots. This survey comprehensively reviews VLA models for embodied manipulation. First, it introduces the developmental history of VLA models. It then analyzes the current research status across five critical dimensions: VLA model structures, training datasets, pre-training methods, post-training methods, and model evaluation. Finally, it summarizes key challenges in VLA model development and real-world deployment, and outlines promising future research directions.
Key words
embodied intelligence / vision-language-action models / robotics / foundation models
Citation
李浩然, 陈宇辉, 崔文博, 刘卫恒, 刘锴, 周明才, 张正涛, 赵冬斌. Survey of vision-language-action models for embodied manipulation [J]. Acta Automatica Sinica, 2026, 52(1): 18-51, 34.
Funding
Supported by the National Natural Science Foundation of China (62136008, 62173324)