Publication Date

Fall 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Mark Stamp

Second Advisor

Katerina Potika

Third Advisor

Mike Wu

Keywords

Adversarial Attacks, Federated Learning

Abstract

In this paper, we experimentally analyze the susceptibility of selected Federated Learning (FL) systems to the presence of adversarial clients. We find that temporal attacks significantly degrade model performance in FL, especially when adversaries remain active throughout the process and during its final rounds. We consider classical machine learning models such as Multinomial Logistic Regression and the Support Vector Classifier (SVC); neural network models, including the Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM); and tree-based models, namely Random Forest and XGBoost. These results highlight the effectiveness of temporal attacks and the need to develop strategies that make the FL process more robust. We also explore defense mechanisms, including outlier detection in the aggregation algorithm.
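
To make the outlier-detection defense concrete, the following is a minimal sketch (not the project's exact method) of one common approach: screening flattened client updates by their distance from the coordinate-wise median before averaging. The function name robust_aggregate, the z_thresh parameter, and the synthetic data are illustrative assumptions.

    # Hypothetical sketch: outlier-filtered federated averaging.
    import numpy as np

    def robust_aggregate(client_updates, z_thresh=2.0):
        """Average client updates after dropping outliers.

        client_updates: list of 1-D numpy arrays (flattened model
        updates), one per client. An update is discarded when its L2
        distance from the coordinate-wise median update lies more than
        z_thresh standard deviations from the mean of all distances.
        """
        updates = np.stack(client_updates)        # (clients, params)
        median = np.median(updates, axis=0)       # robust central estimate
        dists = np.linalg.norm(updates - median, axis=1)
        spread = dists.std() or 1.0               # guard against zero spread
        keep = np.abs(dists - dists.mean()) / spread <= z_thresh
        return updates[keep].mean(axis=0)         # plain FedAvg on survivors

    # Example: 8 benign clients near zero, 2 adversaries pushing large updates.
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
    adversarial = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]
    print(robust_aggregate(benign + adversarial))

Under these assumptions, the two adversarial updates are filtered out and the aggregate stays close to the benign mean; a temporal attacker active only in the final rounds would have to keep its updates within the threshold to influence the model.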
