

Coordinated Variable Speed Limit Control for Freeway Based on Multi-Agent Deep Reinforcement Learning

Abstract


To meet the need for coordinated variable speed limit (VSL) control across multiple freeway segments, and to address the difficulty of efficient training and optimization in a high-dimensional parameter space, a coordinated freeway VSL control method based on the multi-agent deep deterministic policy gradient (MADDPG) algorithm is proposed. Unlike existing studies built on the single-agent deep deterministic policy gradient (DDPG) algorithm, MADDPG abstracts each control unit as an agent with an Actor-Critic reinforcement learning architecture and shares the state and action information of all agents during training, so that each agent can infer the control strategies of the other agents, thereby realizing multi-segment coordinated control. Based on the open-source simulation software SUMO, the control effect of the proposed method is verified in a typical freeway congestion scenario. The experimental results show that the proposed MADDPG algorithm reduces the congestion duration by 69.23% and the standard deviation of segment speeds by 47.96%, which significantly improves traffic efficiency and safety. Compared with the single-agent DDPG algorithm, MADDPG saves 50% of the training time and increases the cumulative return by 7.44%, showing that the multi-agent algorithm improves the optimization efficiency of the coordinated control strategy. Furthermore, to verify the necessity of sharing information among agents, MADDPG is compared with the independent multi-agent DDPG (IDDPG) algorithm: relative to IDDPG, MADDPG improves the congestion duration and the mean speed standard deviation by a further 11.65% and 19.00%, respectively.
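The key structural difference the abstract describes is that MADDPG keeps each agent's actor decentralized (acting on its own segment's observation) while the critic is centralized over the joint state and action of all agents. The sketch below illustrates only that input/output structure with NumPy and random stand-in weights; the agent count, observation features, and network sizes are hypothetical, since the paper's actual architecture is not given here.

```python
import numpy as np

# Hypothetical dimensions for illustration only (not from the paper).
N_AGENTS = 3      # number of VSL control units (freeway segments)
STATE_DIM = 4     # per-segment observation, e.g. speed, density, flow, queue
ACTION_DIM = 1    # one speed-limit decision per segment

rng = np.random.default_rng(0)

def actor(local_state, w):
    """Decentralized actor: maps one agent's LOCAL state to its own action."""
    return np.tanh(local_state @ w)  # bounded action, shape (ACTION_DIM,)

def centralized_critic(all_states, all_actions, w):
    """Centralized critic: scores the JOINT state-action of every agent.
    Conditioning on all agents' actions is what lets each agent account
    for the others' control strategies during training."""
    joint = np.concatenate([all_states.ravel(), all_actions.ravel()])
    return float(joint @ w)  # scalar Q-value

# Random weights stand in for trained parameters.
actor_w = rng.normal(size=(STATE_DIM, ACTION_DIM))
critic_w = rng.normal(size=N_AGENTS * (STATE_DIM + ACTION_DIM))

states = rng.normal(size=(N_AGENTS, STATE_DIM))
actions = np.vstack([actor(s, actor_w) for s in states])
q = centralized_critic(states, actions, critic_w)
```

At execution time only the actors are needed, each consuming its own segment's measurements, which is why the centralized critic does not prevent decentralized deployment.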

YU Rongjie; XU Ling; ZHANG Ruici

Key Laboratory of Road and Traffic Engineering of the Ministry of Education, Tongji University, Shanghai 201804, China; Zhejiang Hangshaoyong Expressway Co., Ltd., Hangzhou 310000, China

Transportation


traffic engineering; coordinated variable speed limit control; multi-agent deep reinforcement learning; traffic jam; freeway; traffic efficiency; traffic safety

Journal of Tongji University (Natural Science), 2024, (007)

Pages 1089-1098 (10 pages)

Supported by the Science and Technology Program of the Department of Transport of Zhejiang Province (2021047)

DOI: 10.11908/j.issn.0253-374x.22441
