Deception and Lie Detection Using Reduced Linguistic Features, Deep Models and Large Language Models for Transcribed Data
Publication Date
1-1-2024
Document Type
Conference Proceeding
Publication Title
Proceedings - 2024 IEEE 48th Annual Computers, Software, and Applications Conference, COMPSAC 2024
DOI
10.1109/COMPSAC61105.2024.00059
First Page
376
Last Page
381
Abstract
In recent years, there has been growing interest in the automatic detection of deceptive behavior. This attention is justified by the wide range of applications that deception detection can have, especially in fields such as criminology. This study aims to contribute to the field of deception detection by capturing transcribed data, analyzing the textual data using Natural Language Processing (NLP) techniques, and comparing the performance of conventional models built on linguistic features with that of Large Language Models (LLMs). In addition, we examined the significance of the applied linguistic features using different feature selection techniques. Through extensive experiments, we evaluated the effectiveness of both conventional and deep NLP models in detecting deception from speech. Among the models applied to the Real-Life Trial dataset, a single-layer Bidirectional Long Short-Term Memory (BiLSTM) network tuned with early stopping outperformed the others, achieving an accuracy of 93.57% and an F1 score of 94.48%.
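For illustration, a minimal sketch of the best-performing configuration described in the abstract (a single-layer BiLSTM classifier trained with early stopping) is shown below. It assumes a Keras/TensorFlow implementation; the vocabulary size, sequence length, embedding dimension, and other hyperparameters are hypothetical placeholders and do not reproduce the paper's exact setup.

# Minimal sketch: single-layer BiLSTM text classifier with early stopping.
# Assumes Keras/TensorFlow; all sizes below are illustrative placeholders,
# not the settings used in the paper.
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

VOCAB_SIZE = 10000   # hypothetical vocabulary size
MAX_LEN = 200        # hypothetical maximum transcript length (tokens)
EMBED_DIM = 100      # hypothetical embedding dimension

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMBED_DIM, input_length=MAX_LEN),
    layers.Bidirectional(layers.LSTM(64)),   # single BiLSTM layer
    layers.Dense(1, activation="sigmoid"),   # binary output: deceptive vs. truthful
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Early stopping halts training once validation loss stops improving.
early_stop = callbacks.EarlyStopping(monitor="val_loss",
                                     patience=3,
                                     restore_best_weights=True)

# x_train / y_train would be tokenized, padded transcripts and binary labels:
# model.fit(x_train, y_train, validation_split=0.2,
#           epochs=50, batch_size=32, callbacks=[early_stop])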
Funding Number
2319802
Funding Sponsor
National Science Foundation
Keywords
Deception Detection, Large Language Models (LLM), Linguistic Features, Natural Language Processing (NLP)
Department
Computer Science
Recommended Citation
Tien Nguyen, Faranak Abri, Akbar Siami Namin, and Keith S. Jones. "Deception and Lie Detection Using Reduced Linguistic Features, Deep Models and Large Language Models for Transcribed Data" Proceedings - 2024 IEEE 48th Annual Computers, Software, and Applications Conference, COMPSAC 2024 (2024): 376-381. https://doi.org/10.1109/COMPSAC61105.2024.00059