Publication Date

Spring 2021

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Christopher J Pollett

Second Advisor

Robert Chun

Third Advisor

Shruti Kothari

Keywords

Unity, Convolutional Neural Network (CNN), American Sign Language (ASL)

Abstract

We present a prototype computer vision system to help the deaf and mute communicate in a shopping setting. Our system uses live video feeds to recognize American Sign Language (ASL) gestures and notify shop clerks of deaf and mute patrons' intents. We generate a video dataset in the Unity Game Engine of 3D humanoid models performing ASL signs in a shop setting. Our system uses OpenPose to detect and recognize the bone points of the human body from the live feed. The system then represents the motion sequences as high-dimensional skeleton joint point trajectories, applies a time-warping technique, and generates a temporal RGB image using the Seq2Im technique. This image is fed to image classification algorithms that classify the gesture performed and report it to the shop clerk. We carried out experiments to analyze the performance of this methodology on the Leap Motion Controller and NTU RGB+D datasets using the SVM and LeNet-5 models. We also compared 3D and 2D bone point datasets and found 90% accuracy for the 2D skeleton dataset.
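The core idea behind the Seq2Im step is to encode a skeleton joint trajectory as an image so that standard image classifiers can consume it. Below is a minimal Python sketch of one such encoding, assuming OpenPose-style 2D joint coordinates over time; the function name, array shapes, and the choice of mapping x and y coordinates to color channels are illustrative assumptions, not the project's actual implementation, and the time-warping step described in the abstract is omitted.

    import numpy as np

    def skeleton_to_rgb_image(joints):
        """Encode a skeleton joint trajectory as a temporal RGB image.

        joints: float array of shape (T, J, 2) holding 2D joint
        coordinates over T frames for J joints (OpenPose-style output).
        Returns a uint8 image of shape (T, J, 3): rows are frames,
        columns are joints, and the R/G channels carry the normalized
        x/y coordinates (the B channel is left at zero).
        """
        T, J, _ = joints.shape
        img = np.zeros((T, J, 3), dtype=np.uint8)
        for c in range(2):  # x -> R channel, y -> G channel
            coord = joints[..., c]
            lo, hi = coord.min(), coord.max()
            scale = 255.0 / (hi - lo) if hi > lo else 0.0
            img[..., c] = ((coord - lo) * scale).astype(np.uint8)
        return img

    # Example: 60 frames of 25 joints, as might come from a live feed.
    trajectory = np.random.rand(60, 25, 2)
    rgb = skeleton_to_rgb_image(trajectory)
    print(rgb.shape)  # (60, 25, 3), ready for an SVM or LeNet-5 after resizing

In this style of encoding, the resulting image can then be resized and passed to a classifier such as an SVM or LeNet-5, as described in the abstract.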
