A reinforcement learning-based path planning for collaborative UAVs
Publication Date
4-25-2022
Document Type
Conference Proceeding
Publication Title
Proceedings of the ACM Symposium on Applied Computing
DOI
10.1145/3477314.3507052
First Page
1938
Last Page
1943
Abstract
Unmanned Aerial Vehicles (UAVs) are widely used in search and rescue missions in unknown environments, where maximal coverage of unknown devices is required. This paper considers using collaborative UAVs (Col-UAV) to execute such tasks. It proposes to plan efficient trajectories for multiple UAVs that collaboratively maximize the number of devices covered within minimal flying time. The proposed reinforcement learning (RL)-based Col-UAV scheme lets all UAVs share their traveling information by maintaining a common Q-table, which reduces the overall time and memory complexity. We simulate the proposed RL Col-UAV scheme in various simulation environments with different grid sizes and compare its performance against other baselines. The simulation results show that the RL Col-UAV scheme can find the optimal number of UAVs to deploy for diverse simulation environments and outperforms its counterparts in covering the maximum number of devices in minimum time.
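The abstract's core idea is that multiple UAV agents read from and write to a single shared Q-table, so each agent benefits from the others' exploration. The following is a minimal sketch of that idea (not the authors' implementation); the grid size, device locations, reward of +1 per newly covered device, and all hyperparameters are illustrative assumptions.

```python
import random

GRID = 5                                       # assumed 5x5 grid world
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2              # assumed hyperparameters


def step(pos, action):
    """Move within grid bounds; return the clipped new position."""
    x, y = pos[0] + action[0], pos[1] + action[1]
    return (min(max(x, 0), GRID - 1), min(max(y, 0), GRID - 1))


def train(n_uavs=3, episodes=200, devices=None, seed=0):
    """Train n_uavs agents against ONE shared Q-table.

    Every agent applies standard Q-learning updates to the same dict,
    so travel information is pooled across UAVs, mirroring the paper's
    common-Q-table idea at a toy scale.
    """
    random.seed(seed)
    devices = devices or {(4, 4), (0, 3), (2, 2)}  # assumed device spots
    Q = {}                                         # the shared Q-table
    for _ in range(episodes):
        covered = set()
        poses = [(0, 0)] * n_uavs                  # all launch from origin
        for _ in range(2 * GRID):                  # bounded flying time
            for i in range(n_uavs):
                s = poses[i]
                # epsilon-greedy action selection over the shared table
                if random.random() < EPS:
                    a = random.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda b: Q.get((s, b), 0.0))
                s2 = step(s, a)
                # +1 only the first time any UAV covers a device
                r = 1.0 if (s2 in devices and s2 not in covered) else 0.0
                covered |= {s2} & devices
                best_next = max(Q.get((s2, b), 0.0) for b in ACTIONS)
                old = Q.get((s, a), 0.0)
                Q[(s, a)] = old + ALPHA * (r + GAMMA * best_next - old)
                poses[i] = s2
    return Q


Q = train()
```

Because updates from all UAVs land in the same table, a state visited by one agent already has informed Q-values when another agent reaches it, which is the intuition behind the claimed reduction in overall time and memory compared with per-UAV tables.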
Funding Number
2020R1I1A3072688
Funding Sponsor
National Research Foundation of Korea
Keywords
collaborative UAVs, path planning, reinforcement learning, unmanned aerial vehicle (UAV)
Department
Applied Data Science
Recommended Citation
Shahnila Rahim, Mian Muaz Razaq, Shih Yu Chang, and Limei Peng. "A reinforcement learning-based path planning for collaborative UAVs." Proceedings of the ACM Symposium on Applied Computing (2022): 1938-1943. https://doi.org/10.1145/3477314.3507052