Publication Date
Spring 2025
Degree Type
Master's Project
Degree Name
Master of Science in Computer Science (MSCS)
Department
Computer Science
First Advisor
Saptarshi Sengupta
Second Advisor
Maryam Khazaei Pool
Third Advisor
Kushagra Bhargava
Keywords
Adversarial Robustness, Quantization, LSTM, Time-Series Forecasting, Post-Training Quantization, Model Compression
Abstract
Modern machine learning models deployed in real-world edge applications must be both resource-efficient and robust against adversarial threats. Deep neural networks, including time-series forecasting models, remain vulnerable to adversarial perturbations, while the quantization techniques used to reduce memory and compute costs affect robustness in unpredictable ways. This project investigates the adversarial robustness of Long Short-Term Memory (LSTM) models after post-training quantization at three precision levels: 16-bit floating point (FP16), 8-bit integer (INT8), and a custom 4-bit scheme. Using the Jena Climate dataset as the primary benchmark, we train a full-precision LSTM model and then apply multiple quantization strategies, including TensorFlow Lite converters and a custom 4-bit method based on ACIQ-style clipping and scaling with bias correction. All models are evaluated under clean and adversarial conditions using Fast Gradient Sign Method (FGSM) and Basic Iterative Method (BIM) attacks at varying perturbation strengths. The results show that the quantized models incur only minor accuracy losses on clean data, while the 4-bit models demonstrate better resistance to adversarial attacks. To further enhance robustness, we develop non-retraining defense strategies, including input clipping, temporal median filtering, Gaussian smoothing, and feature squeezing. These lightweight defenses reduce adversarial degradation and effectively restore post-attack performance without modifying model parameters. The complete pipeline is validated on the Jena Climate dataset and further tested on two additional multivariate datasets (Beijing PM2.5 and Appliances Energy Prediction), demonstrating consistent robustness patterns across data types. Our research highlights essential trade-offs among compression, efficiency, and robustness, providing practical guidance for building secure time-series models suitable for edge AI deployment.
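
To illustrate the attack setup summarized above, the following is a minimal sketch, not the project's exact code, of FGSM and BIM applied to a Keras LSTM forecaster; the names model, x (a batch of input windows), y (targets), and the use of mean squared error as the forecasting loss are assumptions for illustration.

    import tensorflow as tf

    loss_fn = tf.keras.losses.MeanSquaredError()

    def fgsm(model, x, y, eps):
        # One-step Fast Gradient Sign Method: perturb the input window in the
        # direction of the sign of the loss gradient.
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = loss_fn(y, model(x, training=False))
        grad = tape.gradient(loss, x)
        return x + eps * tf.sign(grad)

    def bim(model, x, y, eps, alpha, steps):
        # Basic Iterative Method: repeated small FGSM steps, with the result
        # clipped back into an eps-ball around the original input.
        x = tf.convert_to_tensor(x, dtype=tf.float32)
        x_adv = tf.identity(x)
        for _ in range(steps):
            x_adv = fgsm(model, x_adv, y, alpha)
            x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)
        return x_adv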
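The non-retraining input defenses named in the abstract could likewise be sketched as simple preprocessing steps applied to input windows before inference; the function names, default window sizes, the (batch, time, features) layout, and the assumption that inputs are scaled to [0, 1] for feature squeezing are illustrative choices rather than the project's exact implementation.

    import numpy as np
    from scipy.ndimage import median_filter, gaussian_filter1d

    def clip_inputs(x, lo, hi):
        # Input clipping: bound each feature to a plausible range
        # (e.g., per-feature min/max observed in the training data).
        return np.clip(x, lo, hi)

    def temporal_median(x, window=3):
        # Temporal median filtering along the time axis to suppress
        # spiky, high-frequency perturbations.
        return median_filter(x, size=(1, window, 1))

    def gaussian_smooth(x, sigma=1.0):
        # Gaussian smoothing along the time axis.
        return gaussian_filter1d(x, sigma=sigma, axis=1)

    def feature_squeeze(x, bits=5):
        # Feature squeezing: round inputs to 2**bits discrete levels,
        # removing small adversarial perturbations.
        levels = 2 ** bits - 1
        return np.round(x * levels) / levels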
Recommended Citation
Arora, Maanak, "On the Adversarial Robustness of Quantized Neural Networks Against Common Adversaries in Time-Series Forecasting" (2025). Master's Projects. 1521.
DOI: https://doi.org/10.31979/etd.vg4n-y8x2
https://scholarworks.sjsu.edu/etd_projects/1521