广西医学 2025, Vol. 47, Issue (8): 1088-1098, 11. DOI: 10.11675/j.issn.0253-4304.2025.08.04
Research and application progress on the interpretability of medical artificial intelligence
Abstract
The deep integration of artificial intelligence (AI) into healthcare has given rise to innovative paradigms such as precision diagnosis, personalized treatment, and proactive health management. However, the opacity of AI decision-making (often termed the "black box" effect) can trigger crises of clinical trust and pose regulatory compliance challenges. Explainable artificial intelligence (XAI), by unveiling the logic behind model decisions, has emerged as a pivotal approach to reconciling the tension between AI's potential and its application bottlenecks. XAI is evolving from an auxiliary explanatory tool into a systematic solution. It is poised to shift medical AI from "outcome delivery" to "process transparency", laying the foundation for a trustworthy intelligent healthcare ecosystem. This paper systematically deconstructs the multidimensional implications of interpretability in medical AI, traces the evolution of key technologies including feature importance analysis, causal inference, and multi-modal fusion, and examines core challenges such as data heterogeneity and accountability demarcation, supported by empirical studies of cutting-edge applications such as medical imaging diagnosis and drug discovery.
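To make the feature importance analysis named above concrete, the following minimal Python sketch (illustrative only; the synthetic data, random forest model, and scikit-learn calls are assumptions of this sketch, not methods reported in the paper) uses permutation importance, a model-agnostic technique that shuffles one input feature at a time on held-out data and measures how much the model's accuracy drops.

    # Illustrative sketch of feature importance analysis, one common XAI technique.
    # Assumption: synthetic tabular data and a random forest stand in for a real
    # clinical dataset and model; this is not the paper's own method.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for tabular clinical features (e.g., labs, vitals).
    X, y = make_classification(n_samples=1000, n_features=8, n_informative=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # An otherwise opaque "black box" classifier.
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

    # Permutation importance: a large accuracy drop after shuffling a feature
    # means the model relies heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature_{i}: mean accuracy drop = {result.importances_mean[i]:.3f}")

Rankings of this kind give clinicians a first check on whether a model's decision drivers are clinically plausible, which is one route to the "process transparency" the abstract describes.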
Key words
Artificial intelligence / Interpretability / Medicine / Causal inference / Clinical decision support
Classification
Medicine and health
Cite this article
张会勇, 王富博. Research and application progress on the interpretability of medical artificial intelligence[J]. 广西医学, 2025, 47(8): 1088-1098, 11.
Funding
General Program of the Natural Science Foundation of Guangxi Zhuang Autonomous Region (桂科发[2025]69号)
Science Fund for Distinguished Young Scholars of Guangxi Zhuang Autonomous Region (2023GXNSFFA026003)