A reinforcement learning-based path planning for collaborative UAVs
Proceedings of the ACM Symposium on Applied Computing
Unmanned Aerial Vehicles (UAVs) are widely used in search-and-rescue missions in unknown environments, where coverage of unknown devices must be maximized. This paper considers using collaborative UAVs (Col-UAV) to execute such tasks and proposes planning efficient trajectories for multiple UAVs so that they collaboratively cover the maximum number of devices in the minimum flying time. The proposed reinforcement learning (RL)-based Col-UAV scheme lets all UAVs share their traveling information by maintaining a common Q-table, which reduces the overall time and memory complexities. We simulate the proposed RL-based Col-UAV scheme in various environments with different grid sizes and compare its performance against other baselines. The simulation results show that the RL-based Col-UAV scheme can find the optimal number of UAVs to deploy for each simulation environment and outperforms its counterparts in covering the maximum number of devices in the minimum time.
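The core idea in the abstract — multiple UAVs writing their experience into one common Q-table so that each benefits from the others' exploration — can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the grid size, reward values, and hyperparameters below are illustrative assumptions, and the "devices" are reduced to a single goal cell for brevity.

```python
import random

# Minimal sketch (assumed details, not the paper's exact scheme):
# several UAV agents explore a small grid and update ONE shared Q-table,
# so experience gathered by any UAV immediately benefits all of them.

GRID = 4                                      # illustrative grid side length
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2             # hypothetical hyperparameters
GOAL = (GRID - 1, GRID - 1)                   # cell holding a device to cover

q_table = {}                                  # common Q-table shared by all UAVs

def q(state, a):
    """Look up a Q-value in the shared table (0.0 if unseen)."""
    return q_table.get((state, a), 0.0)

def step(state, a):
    """Move within the grid; small step cost, reward on reaching the device."""
    r, c = state
    dr, dc = ACTIONS[a]
    nxt = (min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

def run_episode(rng):
    """One UAV flies an episode, writing updates into the shared table."""
    state = (0, 0)
    for _ in range(50):
        if rng.random() < EPS:                # epsilon-greedy exploration
            a = rng.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda x: q(state, x))
        nxt, reward, done = step(state, a)
        best_next = max(q(nxt, x) for x in range(len(ACTIONS)))
        # Standard Q-learning update, written into the SHARED table
        q_table[(state, a)] = q(state, a) + ALPHA * (
            reward + GAMMA * best_next - q(state, a))
        state = nxt
        if done:
            break

rng = random.Random(0)
for episode in range(300):   # interleaved episodes stand in for multiple UAVs
    run_episode(rng)
```

Because every episode, regardless of which UAV flew it, updates the same `q_table`, the agents jointly converge on a policy rather than each learning from scratch — the source of the time and memory savings the abstract claims.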
National Research Foundation of Korea
collaborative UAVs, path planning, reinforcement learning, unmanned aerial vehicle (UAV)
Applied Data Science
Shahnila Rahim, Mian Muaz Razaq, Shih Yu Chang, and Limei Peng. "A reinforcement learning-based path planning for collaborative UAVs." Proceedings of the ACM Symposium on Applied Computing (2022): 1938-1943. https://doi.org/10.1145/3477314.3507052