Publication Date

Spring 2023

Degree Type

Master's Project

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Genya Ishigaki

Second Advisor

Faranak Abri

Third Advisor

Fabio Di Troia

Keywords

edge computing, explainable AI, caching strategies

Abstract

Serverless edge computing environments use lightweight containers to run IoT services on demand, i.e., only when a service is requested. These containers incur a cold-start latency when starting up. One way to reduce this startup delay is to cache containers on edge nodes, which are nodes located in close proximity to the IoT devices. Because resource availability on these edge devices is limited, efficient container caching strategies are required, and such strategies must also take proper resource utilization into account. This project aims to further reduce startup latency and to explain the resulting caching strategy using Explainable Reinforcement Learning (XRL). It proposes a method that uses Deep Reinforcement Learning (DRL) to learn a container caching policy by maximizing the expected cumulative discounted reward. The proposed method uses a probability distribution function to distribute IoT service requests among the other edge devices, exploiting the heterogeneous and distributed character of edge computing environments. An XRL method called the action-influence model is integrated into the DRL framework so that the edge computing service provider can understand the logic behind the caching decisions and further improve performance. A comprehensive analysis of the impact of caching decisions on the overall performance of the DRL algorithm is provided.
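To make the DRL objective in the abstract concrete, the sketch below computes the cumulative discounted reward for one episode of caching decisions. This is an illustrative example only, not the project's actual code; the reward values (+1 for a cache hit on a warm container, -1 for a cold start) and the discount factor are assumptions for demonstration.

```python
GAMMA = 0.9  # discount factor (assumed value for illustration)

def discounted_return(rewards, gamma=GAMMA):
    """Cumulative discounted reward G = sum over t of gamma**t * r_t,
    the quantity the DRL caching policy seeks to maximize in expectation."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Hypothetical episode: cache hit, cold start, cache hit
episode = [1, -1, 1]
print(discounted_return(episode))  # 1 - 0.9 + 0.81, approximately 0.91
```

In the project's setting, each reward would reflect whether a requested IoT service's container was already cached on the edge node (avoiding the cold start) and how well the node's limited resources were utilized.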

Available for download on Friday, May 24, 2024
