Scheduling of dynamic tasks in e-government clouds using deep reinforcement learning

龙宇杰¹ 修熙¹ 黄庆² 黄晓勉³ 李莹⁴ 吴维刚¹

计算机应用研究 (Application Research of Computers), 2024, Vol. 41, Issue 6: 1797-1802. DOI: 10.19734/j.issn.1001-3695.2023.10.0527

Author information

  • 1. School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006
  • 2. Guangzhou Digital Government Operation Center, Guangzhou 510635
  • 3. Guangdong Yixun Technology Co., Ltd. (广东亿迅科技有限公司), Guangzhou 510635
  • 4. Guangzhou Bingosoft Co., Ltd. (广州市品高软件股份有限公司), Guangzhou 510663

Abstract

Task scheduling in e-government cloud centers has long been a complex problem. Most existing task scheduling solutions rely on expert knowledge and are not versatile enough to cope with dynamic cloud environments, which often leads to low resource utilization, degraded quality of service, and longer makespans. To address this issue, this paper proposed a deep reinforcement learning (DRL) scheduling algorithm based on the advantage actor-critic (A2C) mechanism. Firstly, the actor network parameterized the policy and chose scheduling actions based on the current system state, while the critic network assigned a score to that state. Then, the algorithm updated the actor policy network by gradient ascent, using the critic's scores to judge the effectiveness of the chosen actions. Finally, the paper conducted simulation experiments using real data from production datacenters. The results show that, compared with the classic policy gradient algorithm and five commonly used heuristic task scheduling methods, this method improves resource utilization in cloud datacenters and reduces makespan. This suggests that the proposed method is better adapted to dynamic e-government clouds.
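The actor-critic loop the abstract describes can be sketched in miniature: an actor picks the machine for each arriving task, a critic scores the resulting state, and the TD error serves as the advantage that drives both updates. Everything below is an illustrative assumption, not the paper's implementation — linear models stand in for the actor and critic networks, and the toy reward (negative maximum machine load, a rough proxy for makespan) is invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
N_MACHINES = 4                      # action = index of the machine that receives the task
GAMMA, LR_ACTOR, LR_CRITIC = 0.99, 0.05, 0.05

# Linear function approximators (toy stand-ins for the paper's networks).
theta = np.zeros((N_MACHINES, N_MACHINES))   # actor: logits = theta @ state
w = np.zeros(N_MACHINES)                     # critic: V(s) = w @ state

def policy(state):
    """Softmax over machine-placement actions."""
    logits = theta @ state
    logits -= logits.max()                   # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def step(state, task_size):
    """One A2C update: sample an action, observe reward, adjust actor and critic."""
    global theta, w
    probs = policy(state)
    a = rng.choice(N_MACHINES, p=probs)

    next_state = state.copy()
    next_state[a] += task_size               # placing the task raises that machine's load
    reward = -next_state.max()               # toy objective: keep the max load (makespan proxy) low

    # TD error doubles as the advantage estimate A(s, a).
    advantage = reward + GAMMA * (w @ next_state) - (w @ state)

    # Actor: gradient ascent on advantage * log pi(a|s) for a softmax policy.
    grad_log_pi = -np.outer(probs, state)
    grad_log_pi[a] += state
    theta += LR_ACTOR * advantage * grad_log_pi

    # Critic: move V(s) toward the bootstrapped target (semi-gradient TD(0)).
    w += LR_CRITIC * advantage * state
    return next_state

state = np.zeros(N_MACHINES)
for _ in range(200):
    state = step(state, task_size=rng.uniform(0.1, 1.0))
    state *= 0.9                             # machines gradually finish their work
```

Using the TD error as the advantage is what distinguishes A2C from plain policy gradient: the critic's baseline removes much of the variance in the gradient estimate, which is why the paper contrasts the method with the classic policy gradient algorithm.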

Keywords

e-government / cloud computing / task scheduling / deep reinforcement learning / actor-critic

Classification

Information technology and security science

Cite this article

龙宇杰, 修熙, 黄庆, 黄晓勉, 李莹, 吴维刚. 基于深度强化学习的电子政务云动态化任务调度方法[J]. 计算机应用研究, 2024, 41(6): 1797-1802.
