Author

Kush Patel

Publication Date

Spring 2025

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Saptarshi Sengupta

Second Advisor

William Andreopoulos

Third Advisor

Thomas Austin

Keywords

Time series forecasting, transformer models, dynamic attention, adversarial robustness, FGSM attack, BIM attack

Abstract

Transformer architectures have emerged as powerful tools for time series forecasting, excelling at capturing complex temporal dependencies across multivariate inputs. However, these models are highly susceptible to adversarial attacks such as the Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM), which can significantly degrade predictive performance through small, targeted perturbations. This work integrates dynamic attention mechanisms (adaptive masking modules that introduce controlled variability into attention pathways) into a transformer forecasting model to enhance robustness against such attacks. Using two distinct datasets, we compare the performance of a standard transformer and a dynamic attention-enhanced transformer under both clean and adversarial conditions. Results show that while both models perform similarly on non-attacked data, the dynamic attention model consistently maintains lower error rates as adversarial intensity increases, demonstrating improved resilience without the need for adversarial training or additional defense layers. These findings highlight dynamic architectural defenses as lightweight, model-level strategies for improving the robustness and reliability of deep learning systems in time series forecasting applications.
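The FGSM and BIM attacks named above follow standard gradient-based formulations: FGSM takes a single step of size ε along the sign of the loss gradient with respect to the input, and BIM repeats smaller steps while clipping back to an ε-ball around the original input. A minimal sketch of both, using a toy linear forecaster with hypothetical weights (not the thesis model or code):

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """FGSM: one step of size eps along the sign of the loss gradient."""
    return x + eps * np.sign(grad)

def bim_perturb(x, grad_fn, eps, alpha, steps):
    """BIM: repeated small signed steps, clipped to an eps-ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay within the eps budget
    return x_adv

# Toy forecaster: prediction = w . x, squared-error loss against target y.
w = np.array([0.5, -0.2, 0.8])   # hypothetical weights
x = np.array([1.0, 2.0, 3.0])    # hypothetical input window
y = 1.0                          # hypothetical target

def grad_fn(x_in):
    # d/dx (w.x - y)^2 = 2 * (w.x - y) * w
    return 2.0 * (w @ x_in - y) * w

x_fgsm = fgsm_perturb(x, grad_fn(x), eps=0.1)
x_bim = bim_perturb(x, grad_fn, eps=0.1, alpha=0.05, steps=5)
```

Both perturbations raise the model's loss while remaining within a small ℓ∞ budget, which is what makes them hard to detect in forecasting inputs.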

Available for download on Monday, May 25, 2026
