Publication Date
Spring 2025
Degree Type
Master's Project
Degree Name
Master of Science in Computer Science (MSCS)
Department
Computer Science
First Advisor
Melody Moh
Second Advisor
Teng-Sheng Moh
Third Advisor
Vidya Rangasayee
Keywords
Code Review Automation, Large Language Models, LoRA, QLoRA, Adaptive QLoRA, Retrieval Augmented Generation
Abstract
In an era where Artificial Intelligence and Machine Learning are transforming many domains, Large Language Models (LLMs) have emerged as powerful tools. Reliable code reviews are essential in the software development lifecycle to ensure security and maintain code quality. This project surveys existing methodologies for building efficient code review automation agents and investigates ways to make this process more efficient. Parameter-Efficient Fine-Tuning (PEFT) methods such as LoRA and QLoRA are explored, with additional focus on a hybrid model that combines adaptive QLoRA with contrastive Retrieval-Augmented Generation (RAG), to reduce the memory required for fine-tuning while ensuring inference quality is not compromised. The general-purpose Meta Llama 3.2 3B model is used as the base model. Experiments show that the hybrid approach reduces memory utilization by nearly 50% while achieving low entropy values. The results also show improved performance over baseline systems in both efficiency and inference stability, highlighting the potential of this hybrid technique for real-world code review automation.
Recommended Citation
Aradhya, Sumukh Naveen, "Enhancing Code Review Automation with Large Language Models using QLoRA Fine-Tuning and RAGs" (2025). Master's Projects. 1567.
DOI: https://doi.org/10.31979/etd.4a75-2ngc
https://scholarworks.sjsu.edu/etd_projects/1567