Publication Date

Spring 2023

Degree Type

Master's Project

Degree Name

Master of Science (MS)

Department

Computer Science

First Advisor

Ching-seh Wu

Second Advisor

William Andreopoulos

Third Advisor

Mohit Gupta

Keywords

Ensemble models, eye-tracking, human visuals, image classification, hybrid model, Decision Tree, Extra-Trees classifier, XGBoost, Gradient Boosting, DCGANs

Abstract

With advances in technology, image classification has become one of the core areas of interest for researchers in the field of computer vision. Humans experience a great volume of visuals in their day-to-day lives. The human eye is a powerful tool that not only lets us capture the images around us but also aids in remembering, distinguishing, and interpreting these visuals. Comprehending the images that a user perceives is an important application in the fields of artificial intelligence, smart security systems, and virtual reality. Recent advances in machine learning and neural networks have led to more precise and efficient methods for identifying and categorizing visual data. In this project, we focus on classifying images into different categories by running various ensemble machine-learning models: Decision Tree, Extra-Trees classifier, XGBoost, and Gradient Boosting. Our primary goal is to emphasize two kinds of input data: numerical eye-tracking data and image data. Furthermore, we train a Deep Convolutional Generative Adversarial Network (DCGAN) to generate new images that resemble our input image dataset; this augmentation enlarges the image dataset in order to enhance the performance of our ensemble models. A thorough comparative analysis is then performed between our ensemble models and both previous and current literature. Our experiments conclude that, for image-data classification, our proposed model performs better after data augmentation, and that for classification using numerical eye-tracking data, the Decision Tree outperforms all the other models implemented.
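The comparison of ensemble models on numerical data described above can be sketched as follows. This is a minimal, hypothetical illustration using scikit-learn with a synthetic dataset standing in for the eye-tracking features; the actual data, preprocessing, and hyperparameters of the project are not shown here.

```python
# Hypothetical sketch: comparing ensemble classifiers on numerical
# feature data (a stand-in for eye-tracking features). Dataset and
# hyperparameters are illustrative, not the project's configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic numerical dataset with three target classes
X, y = make_classification(n_samples=1000, n_features=12,
                           n_informative=8, n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Extra Trees": ExtraTreesClassifier(n_estimators=100, random_state=0),
    "Gradient Boosting": GradientBoostingClassifier(random_state=0),
    # XGBoost (xgboost.XGBClassifier) would be evaluated the same way.
}

# Fit each model and record its held-out accuracy
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: {scores[name]:.3f}")
```

Evaluating all models on the same train/test split, as above, keeps the accuracy comparison fair across classifiers.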

Available for download on Friday, May 24, 2024