Author

Pooja Krishan

Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Saptarshi Sengupta

Second Advisor

Navrati Saxena

Third Advisor

Fabio di Troia

Keywords

adversarial attacks, multivariate time-series forecasting

Abstract

The advent of deep learning models has revolutionized industry over the past decade, leading to the widespread proliferation of smart devices and infrastructures. They play a crucial role in safety-critical applications such as self-driving cars and medical image analysis, in sustainable technologies such as power-consumption prediction, and in health-monitoring tools for industrial equipment such as hard disk drives, semiconductor chips, and lithium-ion batteries. Yet these indispensable deep learning models can easily be fooled into making incorrect predictions with complete confidence, leading to catastrophic failures in applications where safety is paramount and to wasted resources elsewhere. The susceptibility of deep learning models in image recognition is a widely researched area, partly because the first model to be attacked was a Convolutional Neural Network (CNN). In this research, we examine the extent to which adversarial attacks originally designed to poison images affect time-series forecasting tasks, and we propose two ways to counter them by making the model more robust. More specifically, we use two white-box attacks, the Fast Gradient Sign Method (FGSM) and the Basic Iterative Method (BIM), to poison the training samples, successfully throwing off the model and causing it to predict the wrong output with high confidence. We also visualize the imperceptible changes the attacks make to the input, which render adversarial examples difficult to detect with the naked eye. After demonstrating the success of the attacks and how easily deep learning models succumb to them, we apply two adversarial defense techniques, data augmentation and layer-wise adversarial hardening, to make the models more robust and resilient to these attacks. Finally, we demonstrate the transferability of the attacks and defense mechanisms by extending our work from a toy dataset used to predict power consumption over a period of time to a much larger dataset used to predict the time-to-failure of hard disk drives, aiding maintenance teams in their operations. Experimental results indicate that the attacks and defense schemes work as intended, yielding a satisfactory decrease in error rates after the adversarial defenses are carried out.
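For readers unfamiliar with the two white-box attacks named in the abstract, the following is a minimal sketch, not taken from the thesis, of how FGSM and BIM perturbations are commonly computed for a regression-style forecaster. It assumes a trained PyTorch model `model`, an input window `x` of shape (batch, timesteps, features), a target `y`, and mean-squared-error loss; `epsilon`, `alpha`, and `steps` are illustrative values, not the settings used in the project.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon=0.01):
        """One-step FGSM: nudge every input value by epsilon in the
        direction that increases the forecasting loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = nn.MSELoss()(model(x_adv), y)
        loss.backward()
        return (x_adv + epsilon * x_adv.grad.sign()).detach()

    def bim_perturb(model, x, y, epsilon=0.01, alpha=0.002, steps=10):
        """BIM: iterated FGSM with a small step size, clipped so the total
        perturbation stays within an epsilon ball around the clean input."""
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = nn.MSELoss()(model(x_adv), y)
            loss.backward()
            with torch.no_grad():
                x_adv = x_adv + alpha * x_adv.grad.sign()
                x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon)
            x_adv = x_adv.detach()
        return x_adv

Under these assumptions, the data-augmentation defense mentioned in the abstract corresponds to retraining the forecaster on a mixture of clean windows and windows perturbed in this way; the layer-wise adversarial hardening scheme is detailed in the full report.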

Available for download on Thursday, May 22, 2025
