Publication Date
Spring 2022
Degree Type
Master's Project
Degree Name
Master of Science (MS)
Department
Computer Science
First Advisor
Mark Stamp
Second Advisor
Fabio Di Troia
Third Advisor
Teng Moh
Keywords
momentum, Hidden Markov Models
Abstract
Momentum is a popular technique for improving convergence rates during gradient descent. In this research, we experiment with adding momentum to the Baum-Welch expectation-maximization algorithm for training Hidden Markov Models. We compare discrete Hidden Markov Models trained with and without momentum on English text and malware opcode data, measuring the resulting changes in model score and classification accuracy. Experiments indicate that adding momentum to Baum-Welch can reduce the number of iterations required for initial convergence during HMM training, particularly when the model is slow to converge. However, momentum does not appear to improve final model performance at higher iteration counts.
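To make the idea concrete, the sketch below shows one way momentum might be folded into a Baum-Welch iteration: classical (heavy-ball) momentum applied to the difference between the re-estimated and current parameters, followed by re-normalization to keep the matrices row-stochastic. This is a minimal illustration under those assumptions, not the project's actual implementation; the helper baum_welch_reestimate and all names here are hypothetical.

```python
import numpy as np

def momentum_baum_welch_step(A, B, pi, velocity, observations, beta=0.9):
    """One Baum-Welch iteration with heavy-ball momentum (illustrative sketch).

    A, B, pi : current HMM parameters (transition, emission, initial).
    velocity : dict of momentum terms, same shapes as the parameters.
    beta     : momentum coefficient; beta = 0 recovers plain Baum-Welch.
    """
    # Standard EM re-estimate via forward-backward.
    # NOTE: baum_welch_reestimate is a hypothetical helper, assumed to
    # return the usual Baum-Welch re-estimated parameters.
    A_new, B_new, pi_new = baum_welch_reestimate(A, B, pi, observations)

    updated = {}
    for name, old, new in (("A", A, A_new), ("B", B, B_new), ("pi", pi, pi_new)):
        # Accumulate the EM step direction across iterations.
        velocity[name] = beta * velocity[name] + (new - old)
        theta = old + velocity[name]
        # Momentum can push entries outside [0, 1], so clip and
        # re-normalize each row back to a valid probability distribution.
        theta = np.clip(theta, 1e-10, None)
        theta /= theta.sum(axis=-1, keepdims=True)
        updated[name] = theta

    return updated["A"], updated["B"], updated["pi"], velocity
```

In this sketch the caller would initialize each velocity entry to zeros (e.g., velocity = {"A": np.zeros_like(A), "B": np.zeros_like(B), "pi": np.zeros_like(pi)}); the re-normalization step is one plausible way to restore the stochastic constraints that a raw momentum update would otherwise violate.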
Recommended Citation
Miller, Andrew, "Hidden Markov Models with Momentum" (2022). Master's Projects. 1085.
DOI: https://doi.org/10.31979/etd.te3f-knqj
https://scholarworks.sjsu.edu/etd_projects/1085