西安交通大学学报(社会科学版) 2025, Vol. 45, Issue (2): 89-98, 10. DOI: 10.15896/j.xjtuskxb.202502008
生成式大模型沿人工智能价值链的风险共治
Risk Co-Governance of Large Generative Models along the AI Value Chain
Abstract
Since the release of ChatGPT in 2022, large models have set off a new wave of generative AI development and opened the prelude to the journey towards artificial general intelligence (AGI). Because large models drive the generative AI value chain, the risks of their upstream training are transmitted downstream, and their downstream deployment triggers further risks, so the risks they cause exist throughout the entire AI value chain. Governing large model risks must therefore rely on co-governance among stakeholders upstream and downstream along the AI value chain, including providers and deployers of large models, as well as providers, users, and affected persons of AI systems based on large models. These stakeholders have complex dependencies and different degrees of control over large models and the AI systems built on them, which jointly determine how large model risks are assessed and addressed.

The EU and China are at the forefront of exploring legislation for large models and have already formulated some provisions for governing large model risks along the AI value chain. At present, however, such provisions regulate stakeholders only in particular links of the AI value chain and are not sufficient to address the chain-wide risks that large models bring. The EU Artificial Intelligence Act establishes a two-tier, progressive obligation system for general-purpose AI models and regulates general-purpose AI systems through risk classification, but it assigns the responsibility for assessing and mitigating risks across the entire value chain only to upstream providers of general-purpose AI models with systemic risk. China recently issued the Interim Measures for the Administration of Generative Artificial Intelligence Services, which mainly regulates generative AI service providers downstream in the value chain but does not regulate pure providers of large models, so it achieves only limited control over large model risks. To fully address these risks, further legislative exploration of risk co-governance among multiple stakeholders along the AI value chain is necessary.

Existing research has distinguished the different stakeholders upstream and downstream of the AI value chain and has proposed that upstream developers bear obligations such as transparency, which facilitates risk communication between upstream and downstream stakeholders; however, there is still a lack of research on how the upstream and downstream stakeholders of large models can collaborate in risk assessment and risk response. This article focuses on the co-governance of large model risks by stakeholders along the entire AI value chain and argues that such co-governance requires a multi-party collaborative mechanism for large model risk governance built on the roles and capabilities of the stakeholders in the value chain. Specifically, it is necessary to clarify the information provision obligations of large model providers, including necessary rules on releasing large models under an open-source licence; to establish a multi-party risk assessment mechanism for the deployment of large models for high-risk purposes; to establish a transparency system for AI systems based on large models; and to improve the co-governance mechanism for risk monitoring and risk response.

Keywords: generative AI; large models; value chain; stakeholders; risk regulation; artificial intelligence legislation

Classification: Politics and Law
Citation: Liu Jinrui (刘金瑞). Risk Co-Governance of Large Generative Models along the AI Value Chain [J]. 西安交通大学学报(社会科学版), 2025, 45(2): 89-98, 10.

Funding: China Law Society ministerial-level legal research project [CLS(2024)C17] (2024).