Publication Date

Spring 2025

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

Melody Moh

Second Advisor

Teng-Sheng Moh

Third Advisor

Vidya Rangasayee

Keywords

Code Review Automation, Large Language Models, LoRA, QLoRA, Adaptive QLoRA, Retrieval Augmented Generation

Abstract

As Artificial Intelligence and Machine Learning transform one domain after another, Large Language Models (LLMs) are emerging as powerful tools. In the software development lifecycle, reliable code review is essential for ensuring security and maintaining code quality. This project surveys existing methodologies for building efficient code review automation agents and investigates ways to make the process more efficient. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA and QLoRA are explored, with additional focus on a hybrid model that combines adaptive QLoRA with contrastive Retrieval Augmented Generation (RAG) to reduce the memory required for fine-tuning without compromising inference quality. Context is drawn from the general-purpose Meta Llama 3.2 3B model. Experiments show that the hybrid approach reduces memory utilization by nearly 50% while achieving low entropy values. The results also show improved performance over baseline systems in both efficiency and inference stability, highlighting the potential of this hybrid technique for real-world code review automation.
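
The abstract names QLoRA as the main lever for cutting fine-tuning memory. Since the project's own code is embargoed (see the download notice below), the following is a minimal sketch of a standard QLoRA setup of the kind described, using the Hugging Face transformers, peft, and bitsandbytes libraries: the base model is loaded with 4-bit NF4 quantization and only small low-rank adapter matrices are trained. The model id matches the Llama 3.2 3B base named in the abstract; the rank, alpha, dropout, and target modules are illustrative assumptions, not the project's actual configuration.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base weights: this is where
# QLoRA's memory savings over full-precision fine-tuning come from.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Base model named in the abstract; gated on Hugging Face Hub.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-3B",
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# LoRA adapters: only these low-rank matrices receive gradients.
# Hyperparameters here are illustrative assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of parameters are trainable

With a setup like this, only the adapter weights and optimizer states occupy trainable memory while the quantized base model stays frozen, which is consistent with the roughly 50% memory reduction the abstract reports for the hybrid approach.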

Available for download on Saturday, May 30, 2026
