Driverless Decision-Making Model Based on Improved Behavioral Cloning and DDPG
The key to driverless technology is that the decision-making layer issues accurate instructions based on the input from the perception module. Reinforcement learning and imitation learning are better suited to complex scenarios than traditional rule-based methods. However, imitation learning as represented by behavioral cloning suffers from compounding errors; this paper improves behavioral cloning with a prioritized experience replay algorithm, strengthening the model's ability to fit the demonstration dataset. The original DDPG (deep deterministic policy gradient) algorithm suffers from low exploration efficiency; experience pool separation and random network distillation (RND) are used to improve DDPG and raise its training efficiency. The improved algorithms are then trained jointly to reduce useless exploration in the early stage of DDPG training. Validation on the TORCS (The Open Racing Car Simulator) platform shows that, within the same number of training episodes, the proposed method learns more stable road keeping, speed keeping, and obstacle avoidance.
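To make the first technique concrete, here is a minimal sketch of prioritized experience replay, which the abstract applies to behavioral cloning: transitions with larger training error are given higher sampling probability, so the model revisits the demonstration samples it currently fits worst. The buffer design, the exponent `ALPHA`, and the toy error values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
ALPHA = 0.6  # how strongly priorities skew sampling (0 = uniform); assumed value

class PrioritizedReplay:
    """Toy proportional-prioritization replay buffer."""

    def __init__(self):
        self.samples, self.priorities = [], []

    def add(self, sample, error):
        # Priority grows with the sample's training error.
        self.samples.append(sample)
        self.priorities.append((abs(error) + 1e-6) ** ALPHA)

    def sample(self, k):
        # Draw k indices with probability proportional to priority.
        p = np.asarray(self.priorities)
        p = p / p.sum()
        idx = rng.choice(len(self.samples), size=k, p=p)
        return [self.samples[i] for i in idx], idx

    def update(self, idx, errors):
        # After a training step, refresh priorities with the new errors.
        for i, e in zip(idx, errors):
            self.priorities[i] = (abs(e) + 1e-6) ** ALPHA

buf = PrioritizedReplay()
buf.add(("state_a", "action_a"), error=0.05)   # already well fitted
buf.add(("state_b", "action_b"), error=2.00)   # poorly fitted
batch, idx = buf.sample(1000)
n_hard = sum(1 for s in batch if s[0] == "state_b")
# The poorly fitted demonstration dominates the sampled batch.
assert n_hard > 500
```

In the paper's setting the "error" would be the behavioral-cloning loss on each demonstration sample, so poorly imitated states are replayed more often.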
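The second technique, random network distillation, can be sketched as follows: a fixed, randomly initialized target network and a trainable predictor network both embed states; the predictor's error on a state serves as an intrinsic exploration bonus, large for novel states and shrinking as the predictor learns. Linear "networks", all sizes, and the learning rate are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, EMBED_DIM, LR = 8, 4, 1e-2  # assumed toy dimensions

# Fixed random target network (never trained).
W_target = rng.normal(size=(STATE_DIM, EMBED_DIM))

# Trainable predictor network.
W_pred = rng.normal(size=(STATE_DIM, EMBED_DIM))

def intrinsic_reward(state):
    """Squared prediction error = novelty bonus added to the env reward."""
    err = state @ W_pred - state @ W_target
    return float(np.mean(err ** 2))

def train_predictor(state):
    """One gradient step moving the predictor toward the fixed target."""
    global W_pred
    err = state @ W_pred - state @ W_target          # shape (EMBED_DIM,)
    grad = 2.0 * np.outer(state, err) / EMBED_DIM    # d(mse)/dW_pred
    W_pred -= LR * grad

s = rng.normal(size=STATE_DIM)
r_before = intrinsic_reward(s)
for _ in range(200):
    train_predictor(s)
r_after = intrinsic_reward(s)
# The bonus for a repeatedly visited state decays toward zero,
# steering exploration toward states the agent has not seen.
assert r_after < r_before
```

This is how RND raises exploration efficiency: DDPG's critic sees an augmented reward, so the actor is pushed toward under-visited regions of the state space.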
Li Weidong (李伟东); Huang Zhenzhu (黄振柱); He Jingwu (何精武); Ma Caoyuan (马草原); Ge Cheng (葛程)
School of Automotive Engineering, Dalian University of Technology, Dalian 116024, Liaoning, China
Subject: Computer Science and Automation
Keywords: unmanned driving; reinforcement learning; imitation learning; decision algorithm; the open racing car simulator (TORCS)
《计算机工程与应用》 (Computer Engineering and Applications), 2024 (014)
Pages 86-95 (10 pages)
Funding: Major Special Project of Science and Technology Innovation of Liaoning Province (ZX20220560).