Domain Specific Finetuning of LLMs Using PEFT Techniques
Publication Date
1-1-2025
Document Type
Conference Proceeding
Publication Title
2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC) 2025
DOI
10.1109/CCWC62904.2025.10903789
First Page
484
Last Page
490
Abstract
As Large Language Models (LLMs) like ChatGPT and Gemini gain widespread adoption across industries, organizations increasingly seek methods to customize these models for domain-specific applications. This research evaluates four parameter-efficient fine-tuning approaches (LoRA, QLoRA, DoRA, and Prompt Tuning) applied to open-source models including LLaMA, Gemma, and Phi across the complex domains of immigration law and insurance. A custom GPT-4o model is developed and fine-tuned on the datasets to serve as a performance benchmark. An LLM-as-a-Judge framework is applied to evaluate response relevance and correctness across the various model configurations. Results demonstrate that LoRA and QLoRA consistently outperform the other techniques, providing an optimal balance between parameter efficiency and task-specific adaptation. The immigration-domain models achieved higher performance owing to their structured datasets, while insurance tasks posed greater challenges due to their diverse and unstructured nature. Prompt Tuning, while computationally lightweight, underperformed in capturing domain complexity. Notably, fine-tuned open models achieved performance comparable to, and sometimes exceeding, that of larger models such as GPT-4o, indicating that smaller models optimized for specific domains can outperform larger, general-purpose models.
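For readers unfamiliar with adapter-based PEFT, the sketch below shows how a LoRA configuration of the kind studied here is typically set up with the Hugging Face PEFT library; the base model name, rank, and target modules are illustrative assumptions, not the hyperparameters reported in the paper.

```python
# Minimal LoRA fine-tuning sketch using Hugging Face Transformers + PEFT.
# All hyperparameters below are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-3.1-8B"  # assumed open-source base model
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA injects trainable low-rank matrices into selected projection layers,
# so only a small fraction of the parameters is updated during fine-tuning.
lora_config = LoraConfig(
    r=16,                                 # rank of the low-rank update
    lora_alpha=32,                        # scaling factor applied to the update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of base weights
```

QLoRA follows the same pattern but first loads the base model in 4-bit precision (for example via transformers' BitsAndBytesConfig with load_in_4bit=True), which is what allows fine-tuning larger models on limited GPU memory.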
Keywords
DoRA, finetuning, Gemma, Large Language Models (LLM), LLaMA, LLM-as-a-Judge, LoRA, Phi, QLoRA
Department
Applied Data Science
Recommended Citation
Deva Kumar Gajulamandyam, Sainath Veerla, Yasaman Emami, Kichang Lee, Yuanting Li, Jinthy Swetha Mamillapalli, and Simon Shim. "Domain Specific Finetuning of LLMs Using PEFT Techniques." 2025 IEEE 15th Annual Computing and Communication Workshop and Conference (CCWC) (2025): 484-490. https://doi.org/10.1109/CCWC62904.2025.10903789