通信学报 (Journal on Communications), 2024, Vol. 45, Issue 5: 90-100, 11. DOI: 10.11959/j.issn.1000-436x.2024101
Research on deep reinforcement learning in Internet of vehicles edge computing based on Quasi-Newton method
Abstract
To address ineffective task offloading decisions caused by multitasking and resource constraints in vehicular networks, the Quasi-Newton method deep reinforcement learning dual-phase online offloading (QNRLO) algorithm was proposed. The algorithm first incorporated batch normalization to optimize the training process of the deep neural network, and then applied the Quasi-Newton method to effectively approximate the optimal solution. Through this dual-stage optimization, performance under multitasking and dynamic wireless channels was significantly improved, increasing computational efficiency. By introducing Lagrange multipliers and a reconstructed dual function, the non-convex optimization problem was transformed into a convex optimization of the dual function, ensuring the global optimality of the algorithm. Additionally, transmission-time allocation in the vehicular network model was considered, enhancing the practicality of the algorithm. Compared with existing algorithms, the proposed algorithm significantly improves the convergence and stability of task offloading, effectively addresses task offloading in vehicular networks, and offers high practicality and reliability.
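The record above does not include the paper's formulation, so the dual-phase idea described in the abstract can only be sketched under assumptions: a batch-normalized DNN proposes a relaxed per-task offloading decision, and a quasi-Newton step (here SciPy's L-BFGS-B applied to an illustrative concave rate objective) refines it. The class `OffloadingPolicy`, the function `negative_computation_rate`, and every parameter below are hypothetical placeholders, not the QNRLO algorithm itself.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize


class OffloadingPolicy(nn.Module):
    """Small DNN with batch normalization that maps channel gains of N tasks
    to a relaxed (continuous, in [0, 1]) offloading decision per task."""
    def __init__(self, num_tasks: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_tasks, hidden),
            nn.BatchNorm1d(hidden),   # phase 1: batch normalization stabilizes DNN training
            nn.ReLU(),
            nn.Linear(hidden, num_tasks),
            nn.Sigmoid(),             # relaxed offloading ratio for each task
        )

    def forward(self, h):
        return self.net(h)


def negative_computation_rate(x, h, weights):
    """Illustrative (hypothetical) objective: weighted log-rate of the offloaded
    fraction x under channel gains h, negated so it can be minimized."""
    x = np.clip(x, 1e-6, 1.0)
    return -np.sum(weights * np.log1p(h * x))


def quasi_newton_refine(x0, h, weights):
    """Phase 2: refine the DNN's relaxed decision with a quasi-Newton
    (L-BFGS-B) step that approximates the optimum of the relaxed objective."""
    res = minimize(
        negative_computation_rate, x0, args=(h, weights),
        method="L-BFGS-B", bounds=[(0.0, 1.0)] * len(x0),
    )
    return res.x


if __name__ == "__main__":
    num_tasks = 5
    policy = OffloadingPolicy(num_tasks).eval()       # eval mode: BatchNorm uses running stats
    h = torch.rand(1, num_tasks)                       # one realization of channel gains
    with torch.no_grad():
        x_relaxed = policy(h).squeeze(0).numpy()       # phase 1: DNN proposal
    weights = np.ones(num_tasks)
    x_refined = quasi_newton_refine(x_relaxed, h.squeeze(0).numpy(), weights)
    print("DNN proposal:      ", np.round(x_relaxed, 3))
    print("Quasi-Newton refined:", np.round(x_refined, 3))
```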
Keywords
Internet of vehicles / task offloading / deep reinforcement learning / Quasi-Newton method
Classification
Information technology and security science
Cite this article
章坚武, 芦泽韬, 章谦骅, 詹明. Research on deep reinforcement learning in Internet of vehicles edge computing based on Quasi-Newton method[J]. 通信学报 (Journal on Communications), 2024, 45(5): 90-100, 11.
Funding
Key Program of Zhejiang Provincial Natural Science Foundation (No. LZ23F010001)