Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

William Andreopoulos

Second Advisor

Navrati Saxena

Third Advisor

Thomas Austin

Keywords

Prompt Engineering, Software Development, Large Language Models, LLaMA, Gemma, Mistral AI, PEFT, QLoRA


Abstract

Starting a new project is a significant challenge in software development: building the project skeleton and configurations requires a large amount of time and effort. This project aims to overcome that challenge using advanced large language models, specifically fine-tuned LLMs. Our implementation focuses on using the capabilities of modern models to simplify and accelerate the complicated process of starting new projects. The process begins when a user posts a README file to a predetermined repository; this README is then used as the source for generating the information needed to set up the project. The core of our system is the customization of a large language model to meet the project's demands: it is trained on a dataset of project descriptions that captures the intricate differences in project structures and configurations. We examine the need for fine-tuning rather than prompt engineering, the effectiveness of several novel PEFT techniques compared with the traditional approach of full fine-tuning, and the performance improvements of open-source models such as LLaMA, Gemma, and Mistral AI when fine-tuned on custom datasets. In conclusion, our project represents a novel project-initialization approach based on fine-tuning LLMs. It may minimize the effort required to create new projects and simplify software development, helping new users adapt to it more easily.
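As an illustration of the QLoRA-style PEFT setup named in the abstract, a configuration sketch using the Hugging Face `transformers` and `peft` libraries might look like the following. The base-model checkpoint and every hyperparameter here are illustrative assumptions, not values taken from this project.

```python
# Hypothetical QLoRA configuration sketch; the model name and all
# hyperparameters are assumptions for illustration only.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # could equally be a Gemma or Mistral checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Low-rank adapters are the only trainable parameters (the "LoRA" part of QLoRA)
lora_config = LoraConfig(
    r=16,                                 # adapter rank
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trained
```

The quantized base weights stay frozen while only the small LoRA adapter matrices receive gradients, which is what makes fine-tuning 7B-class open models feasible on a single GPU.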

Available for download on Thursday, May 22, 2025