
Research Progress on Interpretability of Spectral Deep Learning Models

刘恒钦1, 孙杰1, 闵红2, 安雅睿3, 刘曙2

分析化学, 2025, Vol. 53, Issue (12): 2020-2031. DOI: 10.19756/j.issn.0253-3820.251253

Author Information

  • 1. 上海理工大学材料与化学学院,上海 200093; 上海海关工业品与原材料检测技术中心,上海 201210
  • 2. 上海海关工业品与原材料检测技术中心,上海 201210
  • 3. 上海理工大学材料与化学学院,上海 200093

Abstract

Recently, deep learning (DL) has significantly advanced spectral analysis, enhancing precision and generalization through efficient data processing and feature extraction. However, the "black-box" nature of DL models, which obscures their decision processes, hinders practical application. Consequently, interpretability research has gained attention as a means to improve model transparency, trustworthiness, and scientific reliability. This paper reviews advances in the interpretability of spectral DL models from 2019 to 2025. It outlines feature extraction mechanisms in models such as 1D-CNNs and Transformers. The principles, characteristics, and applicability of various interpretability methods are reviewed from the perspectives of feature importance evaluation, visualization of the model reasoning process, and intrinsic interpretability. Feature importance methods encompass gradient-based and perturbation-based approaches. Reasoning process interpretation includes dimensionality reduction visualization and convolutional layer feature maps. Intrinsic interpretability involves incorporating physical constraints and designing interpretable functional modules. Multi-dimensional interpretation combines multiple methods for a more comprehensive understanding. These approaches improve transparency and reliability while providing a scientific basis for identifying key spectral regions and elucidating underlying mechanisms. Nevertheless, challenges remain, including an incomplete classification framework, a lack of evaluation standards, and the need to balance transparency with predictive performance. Future directions include developing intrinsically interpretable models with physical constraints, optimizing the synergy between performance and interpretability, promoting multi-dimensional applications, establishing evaluation systems, and advancing practical deployment.
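
To make the gradient-based feature importance family mentioned above concrete, the sketch below is a minimal illustration only; it assumes PyTorch, and the network, spectrum length, and saliency procedure are hypothetical rather than taken from the reviewed works. It computes an input-gradient saliency map for a toy 1D-CNN applied to a synthetic spectrum, marking the wavelength channels to which the prediction is most sensitive.

```python
# Minimal sketch (illustrative assumption, not code from the reviewed works):
# input-gradient saliency for a toy 1D-CNN on a synthetic spectrum.
import torch
import torch.nn as nn

# Hypothetical 1D-CNN regressor for a spectrum with 256 wavelength channels.
model = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(8, 1),  # e.g., a predicted analyte concentration
)
model.eval()

spectrum = torch.randn(1, 1, 256, requires_grad=True)  # synthetic spectrum
prediction = model(spectrum)
prediction.sum().backward()  # d(prediction)/d(input) via autograd

# Saliency: absolute input gradient per wavelength channel; larger values
# indicate spectral regions to which the model's output is most sensitive.
saliency = spectrum.grad.abs().squeeze()
top5 = torch.topk(saliency, k=5).indices
print("Most influential wavelength indices:", sorted(top5.tolist()))
```

Perturbation-based alternatives follow the same idea but measure the change in the prediction when individual spectral bands are occluded or shuffled, rather than relying on gradients.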


Key words

Deep learning/Interpretability/Spectral analysis/Review

Cite This Article

刘恒钦, 孙杰, 闵红, 安雅睿, 刘曙. 光谱深度学习模型的可解释性研究进展[J]. 分析化学, 2025, 53(12): 2020-2031.

Funding

Supported by the Scientific Research Project of the General Administration of Customs of the People's Republic of China (No. 2024HK186).
