
Survey of Vision-Language-Action Models for Embodied Manipulation


Acta Automatica Sinica, 2026, Vol. 52, Issue (1): 18-51, 34. DOI: 10.16383/j.aas.c250394



李浩然 ¹, 陈宇辉 ², 崔文博 ¹, 刘卫恒 ¹, 刘锴 ¹, 周明才 ¹, 张正涛 ¹, 赵冬斌 ³

Author Information

  • 1. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  • 2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
  • 3. Beijing Zhongke Huiling Robot Technology Co., Ltd. (北京中科慧灵机器人技术有限公司), Beijing 100080, China


Abstract

Embodied intelligence systems, which enhance agent capabilities through continuous environment interactions, have garnered significant attention from both academia and industry. Vision-language-action (VLA) models, inspired by advancements in large foundation models, serve as universal robotic control frameworks that substantially improve agent-environment interaction capabilities in embodied intelligence systems. This expansion has broadened the application scenarios of embodied intelligence robots. This survey comprehensively reviews VLA models for embodied manipulation. Firstly, it introduces the developmental history of VLA models. Subsequently, it conducts a detailed analysis of the current research status across five critical dimensions: VLA model structures, training datasets, pre-training methods, post-training methods, and model evaluation. Finally, it summarizes key challenges in VLA model development and real-world deployment, while outlining promising future development directions.


Keywords

embodied intelligence; vision-language-action models; robotics; foundation models

Citation

李浩然, 陈宇辉, 崔文博, 刘卫恒, 刘锴, 周明才, 张正涛, 赵冬斌. 面向具身操作的视觉-语言-动作模型综述[J]. 自动化学报, 2026, 52(1): 18-51, 34.

Funding

Supported by the National Natural Science Foundation of China (62136008, 62173324)

Acta Automatica Sinica (自动化学报), ISSN 0254-4156
