Publication Date

Spring 2025

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Robert Chun

Second Advisor

William Andreopoulos

Third Advisor

Ganesh Regoti

Keywords

Large Language Models (LLMs), Parameter-Efficient Fine-Tuning (PEFT), QLoRA, Graph-Based Retrieval, LightRAG, Incremental Knowledge Updates.

Abstract

The exponential growth of medical data has created heightened demand for precise and efficient information retrieval systems. Existing Large Language Models (LLMs) face domain-specific difficulties such as sophisticated medical jargon, contextual comprehension, and the continual advancement of healthcare knowledge. To tackle these challenges, we present MediLightRAG, a two-stage system that integrates parameter-efficient fine-tuning of LLMs with LightRAG’s graph-based retrieval. The first stage uses QLoRA fine-tuning to achieve accurate, resource-efficient adaptation of the model to the medical domain. The second stage employs LightRAG’s two-tiered retrieval architecture, which combines graph-based indexing with dynamic knowledge retrieval, to improve information coverage and precision. MediLightRAG’s incremental knowledge update feature allows newer medical research to be integrated seamlessly. By pairing sophisticated retrieval techniques with compute-efficient tuning, the system substantially improves the accuracy of medical query responses and supports clinical decision-making tasks.
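
The first stage can be illustrated with a minimal sketch of QLoRA-style fine-tuning, assuming a Hugging Face stack (transformers, peft, bitsandbytes); the base model name, LoRA rank, and target modules below are illustrative placeholders, not the project's actual configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model, not necessarily the one used

# 4-bit NF4 quantization of the frozen base weights: the "Q" in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters are the only trainable parameters: the "LoRA" part.
lora_config = LoraConfig(
    r=16,                       # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights

The adapted model would then be trained on a medical instruction or question-answering corpus with a standard causal language-modeling objective, keeping the quantized base weights frozen so the adaptation remains resource-efficient.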

Available for download on Monday, May 25, 2026
