Scaling Container Caching to Larger Networks with Multi-Agent Reinforcement Learning
Publication Date
1-1-2024
Document Type
Conference Proceeding
Publication Title
Proceedings - International Conference on Computer Communications and Networks, ICCCN
DOI
10.1109/ICCCN61486.2024.10637588
Abstract
The development of containers as a tool for scalable computing has driven the adoption of the serverless edge computing paradigm. In particular, containers enable fast deployment of services to edge networks, allowing users to access spatially closer servers and thereby reducing latency. Because instantiating a container incurs an extra delay known as a cold start delay, container caching, which keeps recently used containers alive in memory for reuse by subsequent requests, has been proposed to mitigate this delay. However, edge servers are constrained in memory capacity, so an efficient caching strategy is needed. In this paper, we demonstrate that Multi-Agent Reinforcement Learning (MARL) can effectively learn a cache replacement policy and outperform both traditional heuristics and centralized Deep Reinforcement Learning (DRL) algorithms. Our results also show that the smaller per-agent action space of the proposed MARL approach gives it a significant advantage over DRL, which requires full information about the network. In particular, our experiments demonstrate that MARL scales to larger networks without significant sacrifice in performance.
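Illustrative Sketch
The abstract's core mechanism is a per-node container cache whose eviction decision is the action a MARL agent would learn. The Python sketch below is a hypothetical illustration, not the paper's implementation: the class EdgeNodeCache, the policy functions, and the latency constants COLD_START_MS and WARM_START_MS are all assumed names and values. An LRU heuristic stands in for the traditional baselines the paper compares against; a learned agent would replace the evict_policy callable.

# Hypothetical sketch: each edge node keeps a fixed-size in-memory cache of
# "warm" containers. A request for a cached container is served warm; a miss
# pays the cold start delay and, if the cache is full, forces the node to
# pick a victim to evict. That eviction choice is the local action a MARL
# agent would learn; here a pluggable policy function stands in for it.
import random
from collections import OrderedDict

COLD_START_MS = 500   # assumed cold start penalty (illustrative only)
WARM_START_MS = 5     # assumed warm start latency (illustrative only)

class EdgeNodeCache:
    def __init__(self, capacity, evict_policy):
        self.capacity = capacity
        self.evict_policy = evict_policy  # callable: cache -> victim key
        self.cache = OrderedDict()        # container_id -> last-use timestep

    def serve(self, container_id, t):
        """Return the latency for one request and update cache state."""
        if container_id in self.cache:
            self.cache[container_id] = t  # container is warm: reuse it
            return WARM_START_MS
        if len(self.cache) >= self.capacity:
            victim = self.evict_policy(self.cache)  # local action: choose
            del self.cache[victim]                  # one cached container
        self.cache[container_id] = t
        return COLD_START_MS

def lru_policy(cache):
    # Heuristic baseline: evict the least-recently-used container.
    return min(cache, key=cache.get)

def random_policy(cache):
    # Stand-in for an untrained agent picking actions at random.
    return random.choice(list(cache))

if __name__ == "__main__":
    random.seed(0)
    workload = [random.choice("ABCDAB") for _ in range(1000)]
    for name, policy in [("LRU", lru_policy), ("random", random_policy)]:
        node = EdgeNodeCache(capacity=3, evict_policy=policy)
        total = sum(node.serve(c, t) for t, c in enumerate(workload))
        print(f"{name}: mean latency {total / len(workload):.1f} ms")

Because each agent only chooses among the containers cached on its own node, its action space stays small and fixed as the network grows, which is the scaling advantage over a centralized DRL controller that must reason over the full network state.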
Keywords
Container Caching, Multi-Agent Reinforcement Learning, Serverless Edge Computing (SEC)
Department
Computer Science
Recommended Citation
Austin Chen and Genya Ishigaki. "Scaling Container Caching to Larger Networks with Multi-Agent Reinforcement Learning." Proceedings - International Conference on Computer Communications and Networks, ICCCN (2024). https://doi.org/10.1109/ICCCN61486.2024.10637588