A system is considered in which agents (UAVs) must cooperatively discover points of interest (e.g., burning trees, geographical features) evolving over a grid. The objective is to locate as many points of interest as possible in the shortest possible time. There are two main problems: a control problem, in which the agents must collectively determine the optimal action, and a communication problem, in which the agents must share their local states and infer a common global state. Both problems become intractable as the number of agents grows. This survey/concept paper curates a broad selection of the literature pointing to a possible solution: a unified control/communication architecture within the framework of reinforcement learning. Two components of this architecture are a locally interactive structure in the state space and hierarchical multi-level clustering for system-wide communication. The former mitigates the complexity of the control problem, while the latter adapts to fundamental throughput constraints in wireless networks. The challenges of applying reinforcement learning to multi-agent systems are discussed, the role of clustering in multi-agent communication is explored, and research directions toward unifying these components are suggested.
belief propagation, clustering, computational complexity, distributed systems, mobile ad-hoc networks, multi-agent systems, reinforcement learning, scaling laws, tracking
This work is licensed under a Creative Commons Attribution 4.0 License.
Jonathan Ponniah and Or D. Dantsker. "Strategies for Scaleable Communication and Coordination in Multi-Agent (UAV) Systems." Aerospace (2022). https://doi.org/10.3390/aerospace9090488