Document Type

Conference Proceeding

Publication Title

Proceedings of the World Congress on Electrical Engineering and Computer Systems and Science




Our eyes actively perform tasks including, but not limited to, searching, comparing, and counting. This includes tasks in front of a computer, whether trivial activities like reading email or video gaming, or more serious activities like drone management or flight simulation. Understanding what type of visual task is being performed is important for developing intelligent user interfaces. In this work, we investigated standard machine learning and deep learning methods to identify the task type using eye-tracking data, including both raw numerical data and visual representations of the user's gaze scan paths and pupil size. To this end, we experimented with computer vision algorithms such as Convolutional Neural Networks (CNNs) and compared the results to classic machine learning algorithms. We found that machine learning-based methods classified tasks involving minimal visual search with high accuracy, while CNN techniques performed better on tasks that involve visual search.
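As a rough illustration of the classic machine learning side of the comparison above, the sketch below trains a random-forest classifier on synthetic per-trial gaze features. The feature set (mean pupil size, scan-path length, fixation count), the two task classes, and all data are illustrative assumptions, not the paper's actual features or results.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def gaze_features(n_samples, pupil_mean, path_len_mean):
    # Hypothetical per-trial features: mean pupil size, scan-path length,
    # and fixation count -- illustrative stand-ins, not the paper's feature set.
    return np.column_stack([
        rng.normal(pupil_mean, 0.1, n_samples),
        rng.normal(path_len_mean, 5.0, n_samples),
        rng.poisson(20, n_samples),
    ])

# Two synthetic "task types" with different gaze statistics:
# 0 = a reading-like task (short scan paths), 1 = a search-like task (long paths).
X = np.vstack([gaze_features(50, 3.0, 40.0), gaze_features(50, 4.0, 80.0)])
y = np.array([0] * 50 + [1] * 50)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

The CNN branch of the study would instead rasterize each trial's scan path into an image and feed it to a convolutional network; that pipeline is omitted here for brevity.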


CNN, Eye Tracking, Machine Learning, Vision, Visual search


This is the Version of Record.


Computer Science