Journal of Applied Sciences ›› 2024, Vol. 42 ›› Issue (2): 200-210. doi: 10.3969/j.issn.0255-8297.2024.02.002

• Communication Engineering •

UAV Path Planning and Radio Mapping Based on Deep Reinforcement Learning

WANG Xin1, ZHONG Weizhi1, WANG Junzhi1, XIAO Lijun1, ZHU Qiuming2   

  1. College of Astronautics, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, Jiangsu, China;
    2. College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, Jiangsu, China
  • Received: 2022-06-22  Online: 2024-03-31  Published: 2024-03-28

Abstract: To address the limitations of traditional UAV trajectory optimization methods in building communication models, this paper presents a deep reinforcement learning-based UAV path planning and radio mapping method for cellular-connected UAV communication systems. The proposed method uses an extended double deep Q-network (DDQN) model combined with a radio prediction network to generate UAV trajectories and to predict the cumulative reward resulting from action selection. Furthermore, the DDQN model is trained by combining actual and simulated flights under the Dyna framework, which greatly improves learning efficiency. Simulation results show that, compared with the Direct-RL algorithm, the proposed method makes more effective use of the learned coverage probability map, enabling the UAV to avoid weakly covered areas and reducing the weighted sum of flight time and expected outage time.
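As a rough illustration of the training scheme summarized in the abstract, the sketch below outlines a double DQN update combined with a radio-map prediction network and Dyna-style planning updates that reuse stored transitions with model-predicted outage rewards. This is a minimal sketch under stated assumptions, not the authors' implementation: the class and function names (QNet, RadioMapNet, dyna_update), the per-step reward shaping, and all hyperparameters are illustrative choices.

```python
# Minimal sketch (PyTorch), assuming a low-dimensional UAV state (e.g. position
# features) and a small set of discrete horizontal moves. Names, reward shaping,
# and hyperparameters are illustrative assumptions only.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    """Maps UAV state features to Q-values over the discrete moves."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s):
        return self.net(s)

class RadioMapNet(nn.Module):
    """Predicts the cellular outage probability at a position (learned radio map)."""
    def __init__(self, state_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, s):
        return self.net(s)

def sample(buffer, batch_size):
    """Draws a random mini-batch of (s, a, r, s', done) transitions as tensors."""
    s, a, r, s2, d = map(list, zip(*random.sample(buffer, batch_size)))
    return (torch.stack(s), torch.tensor(a).unsqueeze(1),
            torch.tensor(r).unsqueeze(1), torch.stack(s2),
            torch.tensor(d, dtype=torch.float32).unsqueeze(1))

def ddqn_loss(q_net, target_net, batch, gamma=0.99):
    """Double DQN target: the online net selects a', the target net evaluates it."""
    s, a, r, s2, done = batch
    q = q_net(s).gather(1, a)
    with torch.no_grad():
        a2 = q_net(s2).argmax(dim=1, keepdim=True)
        y = r + gamma * (1.0 - done) * target_net(s2).gather(1, a2)
    return nn.functional.mse_loss(q, y)

def dyna_update(q_net, target_net, radio_net, optimizer, real_buffer,
                batch_size=64, n_planning=5, mu=1.0):
    """One update from real flight data, then several 'virtual flight' updates.

    The planning steps reuse stored states but replace the measured reward with
    one built from the radio-map prediction (a unit time cost plus a weighted
    expected-outage penalty), so the agent keeps learning without extra real flights.
    """
    if len(real_buffer) < batch_size:
        return
    loss = ddqn_loss(q_net, target_net, sample(real_buffer, batch_size))
    optimizer.zero_grad(); loss.backward(); optimizer.step()

    for _ in range(n_planning):
        s, a, _, s2, done = sample(real_buffer, batch_size)
        with torch.no_grad():
            r_sim = -(1.0 + mu * radio_net(s2))  # assumed flight-time + outage cost
        loss = ddqn_loss(q_net, target_net, (s, a, r_sim, s2, done))
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```

In a scheme of this kind, the radio-map network would itself be fitted to outage indicators measured along real flights, so the trajectory policy and the learned radio map improve together; that coupling of real and simulated experience is what the Dyna-based training described in the abstract exploits.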

Key words: UAV cellular communication, path planning, deep reinforcement learning, radio mapping
