Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)


Department

Computer Science

First Advisor

Navrati Saxena

Second Advisor

Fabio Di Troia

Third Advisor

Aneesh Verma


Keywords

Chest X-ray, COVID-19, pneumonia, convolutional neural network, transfer learning, feature extraction, graph neural network, graph convolutional network, graph attention network


Abstract

Accurately diagnosing lung diseases from chest X-ray (CXR) images remains difficult in clinical practice. In this study, we propose a new hybrid method for identifying several types of lung disease from CXR images by combining Convolutional Neural Networks (CNNs) with Graph Neural Networks (GNNs). Our framework pairs the CNN's ability to extract detailed visual features with the GNN's capacity to model complex relationships among those features, enabling comprehensive analysis in a multi-class classification setting covering COVID-19, pneumonia, and normal lung conditions. We used several transfer learning models, DenseNet201, VGG16, VGG19, MobileNetV2, and ResNet50, as backbones for extracting rich feature representations from the CXR images. We then applied K-Means clustering to the extracted CNN feature vectors to transform them into a graph structure in which clusters serve as nodes and edges, derived from cluster association, link images sharing similar pathological features. Instead of committing to a single CNN model, we experimented with various CNN-GNN hybrid combinations to capture both local visual characteristics and global relational patterns. For the GNN component, we developed a Graph Convolutional Network (GCN) hybridized with a Graph Attention Network (GAT). This GCN-GAT hybrid used the graph structure to uncover complex patterns, analyzing the connections between features to classify lung disease. Our best-performing hybrid CNN-GNN model achieved an accuracy of 98.53%, an F1 score of 98.52%, an ROC-AUC of 99.81%, a recall of 98.53%, and a specificity of 99.26% on the multi-class classification task.
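The cluster-to-graph step in the abstract can be sketched in a few lines. This is a minimal pure-Python illustration, not the thesis's implementation: it assumes images become graph nodes and edges join images assigned to the same K-Means cluster, uses a tiny deterministic K-Means stand-in (farthest-point initialization) in place of a real library call, and substitutes toy 2-D vectors for CNN embeddings.

```python
# Hypothetical sketch of cluster-based graph construction from CNN features.
# Assumptions (not from the thesis): images are nodes, edges join images in
# the same K-Means cluster, and the clustering below is a deterministic
# stand-in for a production K-Means implementation.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(features, k, iters=10):
    """Plain K-Means with farthest-point initialization (deterministic)."""
    centroids = [list(features[0])]
    while len(centroids) < k:
        far = max(features, key=lambda x: min(dist2(x, c) for c in centroids))
        centroids.append(list(far))
    labels = [0] * len(features)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(x, centroids[c]))
                  for x in features]
        for c in range(k):
            members = [x for x, lab in zip(features, labels) if lab == c]
            if members:  # keep the old centroid if a cluster empties
                centroids[c] = [sum(col) / len(col) for col in zip(*members)]
    return labels

def build_graph(labels):
    """Undirected edge set: connect every pair of images sharing a cluster."""
    edges = set()
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if labels[i] == labels[j]:
                edges.add((i, j))
                edges.add((j, i))
    return edges

# Toy "CNN feature vectors": two well-separated pathological groups.
feats = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]]
labels = kmeans(feats, 2)
edges = build_graph(labels)  # images 0-1 and 2-3 end up connected
```

In practice the feature vectors would come from one of the named CNN backbones, and the edge set would feed a graph library rather than a plain set of index pairs.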

Available for download on Friday, May 23, 2025