Author

Mrunal Zambre

Publication Date

Spring 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)


Department

Computer Science

First Advisor

Ching-seh Wu

Second Advisor

Katerina Potika

Third Advisor

Mark Stamp


Keywords

big five theory, fine-tuning, in-context learning, large language models, parameter-efficient fine-tuning, personality emulation, prompt engineering, quantized low-rank adaptation


Abstract

The quest for AI systems that can mirror the intricate aspects of human emotion and personality is crucial for enhancing their performance. This project delves into the capabilities of Large Language Models (LLMs) to mimic the Big Five personality traits in human-written essays by utilizing contextual prompts and fine-tuning methods. Diverging from traditional research in this domain, this project explores smaller, open-source LLMs, including LLaMA 2 7B chat, LLaMA 2 13B chat, and Vicuna v1.5 13B, to assess their potential in personality prediction tasks, thereby making high-level personality emulation more accessible and practical for application integration. Through meticulous prompt engineering, we achieved peak performance on the Conscientiousness trait, with a prediction accuracy of 59.5%. Additionally, fine-tuning with the Quantized Low-Rank Adaptation (QLoRA) technique yielded a notable 4% increase in accuracy for the Openness to Experience trait. Our investigation also compares the performance of these models with the latest state-of-the-art (SOTA) methods, showing competitive results, albeit without exceeding these benchmarks. By illustrating the performance of smaller, more accessible LLMs in capturing the complex spectrum of human personality, this study offers significant contributions to the domains of generative AI and psychology.
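As a purely illustrative sketch (the project's actual prompt templates are not reproduced here, and the wording and binary high/low label scheme below are assumptions), a zero-shot contextual prompt for predicting one Big Five trait from an essay might be composed like this:

```python
# Hypothetical prompt builder for Big Five trait prediction.
# The trait names come from the Big Five theory; the prompt wording
# and the 'high'/'low' answer format are illustrative assumptions.

BIG_FIVE = [
    "Openness to Experience",
    "Conscientiousness",
    "Extraversion",
    "Agreeableness",
    "Neuroticism",
]

def build_prompt(essay: str, trait: str) -> str:
    """Compose a zero-shot classification prompt for a single trait."""
    if trait not in BIG_FIVE:
        raise ValueError(f"unknown trait: {trait}")
    return (
        "You are a psychologist analyzing a writer's personality.\n"
        f"Essay: {essay}\n"
        f"Question: Does the author score high or low on {trait}? "
        "Answer with exactly one word: 'high' or 'low'."
    )

prompt = build_prompt(
    "I love trying new foods and exploring unfamiliar ideas.",
    "Openness to Experience",
)
```

A prompt like this would be passed to the chat model, and the one-word completion compared against the essay's gold trait label to compute prediction accuracy.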

Available for download on Friday, May 23, 2025