Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Nada Attar

Second Advisor

Saptarshi Sengupta

Third Advisor

Milind Bhusari

Keywords

Gender Classification, Convolutional Neural Network (CNN), Computer Vision, PIFuHD, CLIP.

Abstract

In computer vision, gender classification has become a vital task, with applications in human-computer interaction, healthcare, and surveillance. In this study, we examine a two-step approach to gender classification based on human joint information, using convolutional neural networks (CNNs).

Using the Leeds Sports Pose (LSP) dataset, we apply a pre-trained C5 model to locate and extract joint information from 2D RGB images. After pre-processing and background removal, we use PIFuHD to transform these 2D images into 3D representations. We then train our models on both the RGB images and the joint images, in their 2D and 3D forms.
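
To make the two-step pipeline concrete, the minimal Python sketch below shows how joints can be extracted from an RGB image and rendered as a separate joint image after background removal. It is illustrative only: the project uses a pre-trained C5 model and PIFuHD, whereas the sketch substitutes the publicly available rembg and MediaPipe Pose libraries, and "person.jpg" is a placeholder file name.

    import cv2
    import numpy as np
    from PIL import Image
    from rembg import remove   # background removal (stand-in for the project's pre-processing)
    import mediapipe as mp     # pose estimation (stand-in for the pre-trained C5 joint extractor)

    # 1. Remove the background so only the person remains.
    foreground = remove(Image.open("person.jpg")).convert("RGB")  # "person.jpg" is a placeholder path
    bgr = cv2.cvtColor(np.array(foreground), cv2.COLOR_RGB2BGR)

    # 2. Detect body joints and draw them on a blank canvas to form a "joint image".
    with mp.solutions.pose.Pose(static_image_mode=True) as pose:
        results = pose.process(cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB))

    joint_image = np.zeros_like(bgr)
    if results.pose_landmarks:
        h, w = bgr.shape[:2]
        for lm in results.pose_landmarks.landmark:
            cv2.circle(joint_image, (int(lm.x * w), int(lm.y * h)), 4, (255, 255, 255), -1)

    cv2.imwrite("joint_image.png", joint_image)
    # The 3D step (lifting the cleaned RGB image with PIFuHD) is run separately on these outputs.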

An empirical evaluation of our method produced encouraging results, with accuracy reaching 80.8% on the joint images alone. Our results show the potential of combining 2D and 3D image representations for gender classification, alongside cutting-edge methods such as CLIP and Tiramisu. This study offers useful insights for future developments in computer vision methods for human-centered applications.
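
As a point of comparison for the CLIP result mentioned above, the short sketch below illustrates zero-shot gender classification with the public OpenAI CLIP package; the text prompts and the input file name are illustrative assumptions, not necessarily those used in this study.

    import torch
    import clip
    from PIL import Image

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load("ViT-B/32", device=device)

    # Candidate text prompts for zero-shot classification (illustrative wording).
    prompts = ["a photo of a man", "a photo of a woman"]
    text = clip.tokenize(prompts).to(device)

    # "person.jpg" is a placeholder for an LSP image after pre-processing.
    image = preprocess(Image.open("person.jpg")).unsqueeze(0).to(device)

    with torch.no_grad():
        logits_per_image, _ = model(image, text)
        probs = logits_per_image.softmax(dim=-1).cpu().numpy()[0]

    print({p: float(prob) for p, prob in zip(prompts, probs)})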

Available for download on Sunday, May 25, 2025
