
从"可解释"到"可信任":人工智能治理的逻辑重构

Guo Xiaodong

Journal of Beijing University of Technology (Social Sciences Edition), 2025, Vol. 25, Issue (6): 117-135, 19. DOI: 10.12120/bjutskxb202506117

从"可解释"到"可信任":人工智能治理的逻辑重构

From"Explainable"to"Trustworthy":A Logical Reconstruction of Artificial Intelligence Governance

Guo Xiaodong 1

Author Information

  • 1. Guanghua Law School, Zhejiang University, Hangzhou 310008, Zhejiang, China


Abstract

The rapid development of artificial intelligence technology, particularly the rise of large language models, has posed severe challenges to the traditional AI governance paradigm centered on "explainability". At the technical level, large models feature massive parameters, complex architectures, and emergent properties, making comprehensive explanation difficult to achieve. At the cognitive level, gaps exist between technical terminology and everyday language, and limited human cognitive capacity makes explanations difficult to understand effectively. At the practical level, explanations are often reduced to formalistic compliance tools, failing to address trust issues. Based on these challenges, the reconstruction of AI governance logic from "explainable" to "trustworthy" becomes inevitable. The "trustworthy" paradigm builds comprehensive trust in AI systems through multiple dimensions. In the technical dimension, it focuses on enhancing system robustness, verifiability, and security. In the value dimension, it aims to achieve alignment between AI and social ethical values. In the governance dimension, it emphasizes constructing an adaptive governance framework with classified and graded regulation, clear accountability, and multi-stakeholder collaboration. These three dimensions support each other, jointly forming a governance system for trustworthy AI. The "trustworthy" paradigm does not completely replace the "explainable" paradigm, but rather situates the latter within a broader trust-building system as an important means in specific contexts rather than a universal goal. This reconstruction reflects the deepening evolution of AI governance theory from a single technical orientation to an integrated technical-social-institutional perspective. It both acknowledges the objective existence of the "black box" nature of complex AI systems and actively explores feasible paths to establish multidimensional trust under such constraints, providing a more inclusive and flexible governance approach for addressing increasingly complex AI systems.


Key words

generative artificial intelligence/artificial intelligence governance/interpretability/trustworthiness/value alignment

Classification

Politics and Law

Cite This Article

Guo Xiaodong. From "Explainable" to "Trustworthy": A Logical Reconstruction of Artificial Intelligence Governance[J]. Journal of Beijing University of Technology (Social Sciences Edition), 2025, 25(6): 117-135, 19.

Funding

National Social Science Fund of China Project (23XFX004)

Journal of Beijing University of Technology (Social Sciences Edition)

Open Access · Peking University Core Journal

ISSN 1671-0398
