Publication Date

Spring 2023

Degree Type

Master's Project

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Mark Stamp

Second Advisor

Fabio Di Troia

Third Advisor

Genya Ishigaki

Keywords

steganography, machine learning models

Abstract

As machine learning and deep learning models become ubiquitous, it is inevitable that there will be attempts to exploit such models in various attack scenarios. For example, in a steganography-based attack, information would be hidden in a learning model; that hidden information might then be used to gain unauthorized access to a computer, or for other malicious purposes. In this research, we determine the steganographic capacity of various classic machine learning and deep learning models. Specifically, we determine the number of low-order bits of a given model's trained parameters that can be altered without significantly affecting its performance. We find that the steganographic capacity of learning models is surprisingly high, and that there tends to be a clear threshold beyond which model performance degrades rapidly.
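As a rough illustration of the kind of bit-level embedding the abstract describes (this is not the authors' code; the function names embed_bits and extract_bits are hypothetical, and NumPy with IEEE-754 float32 parameters is assumed), the following sketch overwrites the n_bits lowest-order mantissa bits of each weight with payload bits and reads them back:

    import numpy as np

    def embed_bits(weights, payload_bits, n_bits):
        """Overwrite the n_bits lowest-order bits of each float32 weight
        with bits from payload_bits (a 0/1 sequence). Returns a copy."""
        flat = weights.astype(np.float32).ravel().copy()
        raw = flat.view(np.uint32)  # raw IEEE-754 bit patterns
        capacity = raw.size * n_bits
        if len(payload_bits) > capacity:
            raise ValueError("payload exceeds steganographic capacity")
        padded = np.zeros(capacity, dtype=np.uint32)
        padded[:len(payload_bits)] = payload_bits
        chunks = padded.reshape(-1, n_bits)  # one n_bits-wide chunk per weight
        values = np.zeros(raw.size, dtype=np.uint32)
        for b in range(n_bits):  # pack big-endian within each chunk
            values = (values << np.uint32(1)) | chunks[:, b]
        mask = np.uint32((1 << n_bits) - 1)
        raw[:] = (raw & ~mask) | values  # splice payload into low-order bits
        return raw.view(np.float32).reshape(weights.shape)

    def extract_bits(weights, n_payload, n_bits):
        """Recover the first n_payload embedded bits from the weights."""
        raw = weights.astype(np.float32).ravel().view(np.uint32)
        bits = []
        for v in raw:
            for b in range(n_bits - 1, -1, -1):  # MSB of the field first
                bits.append((int(v) >> b) & 1)
        return np.array(bits[:n_payload], dtype=np.uint8)

    # Example: hide 8 bits per weight in a small random "model".
    rng = np.random.default_rng(0)
    w = rng.standard_normal((4, 4)).astype(np.float32)
    msg = rng.integers(0, 2, size=8 * w.size, dtype=np.uint32)
    w_steg = embed_bits(w, msg, n_bits=8)
    assert np.array_equal(extract_bits(w_steg, msg.size, 8), msg.astype(np.uint8))
    print(np.max(np.abs(w - w_steg)))  # per-weight perturbation stays small

Since only mantissa bits are touched, overwriting 8 of the 23 float32 mantissa bits perturbs each weight by at most roughly 2^-15 of its own magnitude, which is consistent with the abstract's observation that low-order alterations leave performance largely intact until some threshold is crossed.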
