Publication Date
1-1-2022
Document Type
Conference Proceeding
Publication Title
Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science
DOI
10.11159/mhci22.110
Abstract
Our eyes actively perform tasks including, but not limited to, searching, comparing, and counting. This includes tasks performed in front of a computer, whether trivial activities such as reading email or video gaming, or more serious activities such as drone management or flight simulation. Understanding what type of visual task is being performed is important for developing intelligent user interfaces. In this work, we investigated standard machine learning and deep learning methods to identify the task type from eye-tracking data, including both raw numerical data and visual representations of the user's gaze scan paths and pupil size. To this end, we experimented with computer vision algorithms such as Convolutional Neural Networks (CNNs) and compared the results to classic machine learning algorithms. We found that machine-learning-based methods classified tasks involving minimal visual search with high accuracy, while CNN-based techniques performed better when a visual search task was included.
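As a rough, hypothetical sketch of the comparison described above (not code from the paper), the example below trains a classic machine-learning classifier, here a random forest, on numerical eye-tracking features and a small CNN on image-like representations of gaze data. All data, feature names, and the network architecture are placeholder assumptions, chosen only to illustrate the two modeling routes.

# Hypothetical sketch; synthetic data stands in for real eye-tracking recordings.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_samples, n_tasks = 400, 4          # e.g., read / search / compare / count

# Classic ML on raw numerical eye-tracking features (assumed examples:
# fixation count, mean fixation duration, saccade amplitude, mean pupil size).
X_num = rng.normal(size=(n_samples, 8)).astype(np.float32)
y = rng.integers(0, n_tasks, size=n_samples)
Xtr, Xte, ytr, yte = train_test_split(X_num, y, test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print("Random forest accuracy:", accuracy_score(yte, rf.predict(Xte)))

# CNN on rendered scan-path / pupil-size images (random 64x64 stand-ins here).
X_img = torch.rand(n_samples, 1, 64, 64)
y_t = torch.as_tensor(y, dtype=torch.long)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, n_tasks),
)
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):                    # a few full-batch epochs, for brevity
    opt.zero_grad()
    loss = loss_fn(cnn(X_img), y_t)
    loss.backward()
    opt.step()
print("CNN training loss:", loss.item())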
Keywords
CNN, Eye Tracking, Machine Learning, Vision, Visual search
Department
Computer Science
Recommended Citation
Devangi Vilas Chinchankarame, Noha Elfiky, and Nada Attar. "Visual Task Classification using Classic Machine Learning and CNNs." Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science (2022). https://doi.org/10.11159/mhci22.110
Comments
This is the Version of Record and can also be read online here.