
Research Progress of Interpretable Artificial Intelligence

廖勇 韩小金 刘金林 汪浩

Computer Engineering (计算机工程), 2026, Vol. 52, Issue (3): 41-61, 21. DOI: 10.19678/j.issn.1000-3428.0069925


廖勇 1, 韩小金 1, 刘金林 1, 汪浩 2

Author Information

  • 1. School of Microelectronics and Communication Engineering, Chongqing University, Chongqing 400044, China
  • 2. Chongqing Shouxun Technology Co., Ltd. (重庆首讯科技股份有限公司), Chongqing 401147, China


Abstract

Artificial intelligence has made remarkable progress across many fields, encouraging countries to attach great importance to its research and development. However, the rapid development of artificial intelligence has also brought a series of problems and threats, and overreliance on and blind trust in such models can lead to serious risks. Interpretable artificial intelligence has therefore become a key element in building trusted and transparent intelligent systems, and its research and development demand immediate attention. This survey comprehensively summarizes the research progress on interpretable artificial intelligence in China and abroad from multiple dimensions and levels. Based on current research results in the field, it divides the key technologies of interpretable artificial intelligence into four categories: interpretation models, interpretation methods, safety testing, and experimental verification, with the aim of clarifying the technical focus and development direction of each area. Furthermore, the survey explores specific applications of interpretable artificial intelligence across key industry sectors, including but not limited to education, healthcare, finance, autonomous driving, and justice, demonstrating its significant role in enhancing decision-making transparency. Finally, the survey provides an in-depth analysis of the major technical challenges of interpretable artificial intelligence and outlines future development trends, together with a dedicated investigation of the interpretability of large models, which has attracted considerable attention recently.


Keywords

interpretability; trustworthy; artificial intelligence; demonstration application; large model

Classification

Information Technology and Security Science

Citation

廖勇, 韩小金, 刘金林, 汪浩. 可解释人工智能研究进展[J]. 计算机工程, 2026, 52(3): 41-61, 21.

Funding

Natural Science Foundation of Chongqing (CSTB2023NSCQ-MSX0025)

Computer Engineering (计算机工程), ISSN 1000-3428
