Exploiting Large Language Models (LLMs) through Deception Techniques and Persuasion Principles

Publication Date

1-1-2023

Document Type

Conference Proceeding

Publication Title

Proceedings - 2023 IEEE International Conference on Big Data, BigData 2023

DOI

10.1109/BigData59044.2023.10386814

First Page

2508

Last Page

2517

Abstract

As Large Language Models (LLMs) such as ChatGPT from OpenAI, BARD from Google, Llama2 from Meta, and Claude from Anthropic gain widespread use, ensuring their security and robustness is critical. The broad adoption of these models depends heavily on their reliability and on the proper use of this technology. It is therefore crucial to test these models thoroughly, not only to ensure their quality but also to uncover possible misuses by adversaries for illegal activities such as hacking. This paper presents a novel study on the exploitation of large language models through deceptive interactions. More specifically, it borrows well-known techniques from deception theory to investigate whether these models are susceptible to deceitful interactions. This research aims not only to highlight these risks but also to pave the way for robust countermeasures that enhance the security and integrity of language models in the face of sophisticated social engineering tactics. Through systematic experiments and analysis, we assess the models' performance in these critical security domains. Our results demonstrate a significant finding: these large language models are susceptible to deception and social engineering attacks.

Funding Number

2319802

Funding Sponsor

National Science Foundation

Keywords

BARD, ChatGPT, Claude, Deception Techniques, Deception Theory, Large Language Models (LLM), Llama2, Prompt, Security, Social Engineering

Department

Computer Science
