Publication Date

Fall 2024

Degree Type

Master's Project

Degree Name

Master of Science in Computer Science (MSCS)

Department

Computer Science

First Advisor

William Andreopoulos

Second Advisor

Navrati Saxena

Third Advisor

Thomas Austin

Keywords

Mental health, Large Language Models, Transformers, Comparative Analysis, Llama2, RAG, Finetuning, PEFT Techniques

Abstract

This report presents the implementation and comparative study of fine-tuning and Retrieval Augmented Generation (RAG) with the Llama 7B model in the context of building a reliable AI therapist. Using fine-tuning, the model is trained on a diverse dataset of doctor-patient conversations that predominantly address general health issues, supplemented by a carefully curated mental health corpus drawn from the HOPE dataset to ensure the bot's responsiveness to mental health inquiries. Through integration with a vector database (ChromaDB) and RAG techniques, the model generates contextually relevant responses tailored specifically to mental health concerns. This approach aims to bridge the gap in automated mental health support. The report details the methodology, challenges, and outcomes of the project, offering insights into the potential of AI-based solutions for enhancing the accessibility of mental healthcare and support.
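The RAG pipeline the abstract describes can be sketched as retrieve-then-prompt: embed the user query, find the most similar stored conversation snippets, and prepend them as context before the question is passed to the LLM. The sketch below is a minimal, hedged illustration of that flow; the toy documents, the bag-of-words embedding, and the function names are illustrative stand-ins (the actual project stores real doctor-patient conversations in ChromaDB with learned embeddings and feeds the prompt to Llama 7B).

```python
from collections import Counter
from math import sqrt

# Toy corpus standing in for the doctor-patient conversation snippets
# stored in ChromaDB in the project (texts here are illustrative only).
docs = [
    "Patient reports persistent anxiety and trouble sleeping at night.",
    "Regular exercise and a balanced diet can help manage stress.",
    "Cognitive behavioral techniques may reduce symptoms of depression.",
]

def embed(text):
    # Bag-of-words term frequencies as a stand-in for a learned embedding.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    # Rank stored documents by similarity to the query (ChromaDB's
    # query() call plays this role in the actual system).
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query):
    # Retrieved context is prepended so the LLM answers grounded in
    # the stored conversations rather than from parameters alone.
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("I have anxiety and cannot sleep"))
```

The same structure holds when the in-memory list is swapped for a ChromaDB collection: only `retrieve` changes, while prompt construction and the downstream model call stay the same.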

Available for download on Wednesday, December 31, 2025
