
强化学习的可解释方法分类研究

Classification study of interpretable methods for reinforcement learning

唐蕾 ¹, 牛园园 ¹, 王瑞杰 ¹, 行本贝 ¹, 王一婷 ¹

计算机应用研究, 2024, Vol. 41, Issue (6): 1601-1609, 9. DOI: 10.19734/j.issn.1001-3695.2023.09.0430

Author information

  • 1. School of Information Engineering, Chang'an University, Xi'an 710018, China

Abstract

Reinforcement learning can achieve autonomous learning in dynamic and complex environments, which has led to its wide use in fields such as law, medicine, and finance. However, reinforcement learning still faces many problems, such as an unobservable global state space, strong dependence on the reward function, and uncertain causality. These result in weak interpretability and seriously hinder its adoption in related fields: it becomes difficult to judge whether a decision violates social, legal, and moral requirements, or whether it is accurate and trustworthy. To clarify the current status of interpretability research in reinforcement learning, this article discusses the field from the aspects of interpretable models, interpretable policies, environment interaction, and visualization. On this basis, it systematically reviews the research status of reinforcement learning interpretability, classifies and explains its interpretable methods, and finally proposes future development directions for the interpretability of reinforcement learning.

Keywords

reinforcement learning / interpretability / policy-value function / environment interaction / visual interpretation

Classification

Information Technology and Security Science

Cite this article

唐蕾, 牛园园, 王瑞杰, 行本贝, 王一婷. 强化学习的可解释方法分类研究[J]. 计算机应用研究, 2024, 41(6): 1601-1609, 9.

计算机应用研究 (Application Research of Computers)

OA | 北大核心 | CSTPCD

ISSN: 1001-3695
