Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Genya Ishigaki

Second Advisor

Sayma Akther

Third Advisor

William Andreopoulos

Keywords

Serverless Edge Computing, Container Caching, Deep Reinforcement Learning, Action Masking

Abstract

Serverless edge computing is an emerging technology that realizes low-latency, resource-efficient function calls for responsive computing. In cloud-based serverless computing, it is common practice to cache sufficiently many function containers for future reuse, reducing the overhead of container initialization. In contrast, the limited capacity of edge nodes makes caching in serverless edge computing a complex problem of selecting an appropriate set of container caches based on the request distribution. Deep Reinforcement Learning (DRL) can play a crucial role in optimizing caching decisions under dynamic request arrivals. In this paper, we propose a scalable DRL-based container caching agent that improves training efficiency and caching performance. To achieve scalability, we apply action masking, which eliminates unpromising caching actions from the DRL exploration and concentrates computing resources on more promising actions. Our proposal is evaluated through a comparative simulation analysis based on the metrics of latency, training time, and cache hit rate. Our results suggest that DRL with action masking can efficiently solve the combinatorial optimization of container caching, and they further point to the capability of DRL in solving similar combinatorial optimization problems in networking.
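To illustrate the action-masking idea the abstract describes, the following minimal sketch (our own illustration, not the authors' implementation) shows the standard mechanism: policy logits for infeasible actions, such as caching a container that would exceed the edge node's capacity, are set to negative infinity so they receive zero probability and are never explored. The capacity constraint and container sizes here are hypothetical.

```python
import numpy as np

def masked_softmax(logits, mask):
    """Convert policy logits to action probabilities, assigning zero
    probability to masked-out (infeasible) actions."""
    masked_logits = np.where(mask, logits, -np.inf)
    # Subtract the max over feasible actions for numerical stability.
    z = masked_logits - masked_logits.max()
    exp = np.exp(z)
    return exp / exp.sum()

# Hypothetical example: 4 candidate containers; caching containers
# 1 and 3 would exceed the edge node's remaining capacity, so the
# mask marks them infeasible before the policy samples an action.
logits = np.array([1.0, 2.0, 0.5, 3.0])
mask = np.array([True, False, True, False])

probs = masked_softmax(logits, mask)
print(probs)  # masked actions receive probability exactly 0
```

Because the agent never samples a masked action, exploration concentrates on the feasible subset, which is what yields the training-time savings the abstract reports.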

Available for download on Friday, May 23, 2025
