Building Socially Intelligent AI Systems: Evidence from the Trust Game Using Artificial Agents with Deep Learning
Publication Date
12-1-2023
Document Type
Article
Publication Title
Management Science
Volume
69
Issue
12
DOI
10.1287/mnsc.2023.4782
First Page
7236
Last Page
7252
Abstract
The trust game, a simple two-player economic exchange, is extensively used as an experimental measure of the trust and trustworthiness of individuals. We construct deep neural network–based artificial intelligence (AI) agents to participate in a series of experiments based on the trust game. These artificial agents are trained by playing with one another repeatedly, without any prior knowledge, assumptions, or data regarding human behavior. We find that, under certain conditions, AI agents produce actions that are qualitatively similar to the decisions of human subjects reported in the trust game literature. We further explore the factors that influence the emergence and levels of cooperation by artificial agents in the game. This study offers evidence that AI agents can develop trusting and cooperative behaviors purely from an interactive trial-and-error learning process. It constitutes a first step toward building multiagent-based decision support systems in which interacting artificial agents leverage social intelligence to achieve better outcomes collectively.
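The abstract does not spell out the game's parameters or the agents' training code. As a point of reference only, the sketch below implements the standard trust game payoffs (the conventional tripling of the amount sent, with an assumed endowment of 10) and a toy self-play loop in which tabular Q-learners stand in for the paper's deep Q-networks. All names, parameters, and update rules here are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

ENDOWMENT = 10   # assumed investor endowment (not stated in the abstract)
MULTIPLIER = 3   # conventional tripling of the amount sent in the trust game

def payoffs(sent: int, returned: int) -> tuple[int, int]:
    """One round of the trust game: (investor payoff, trustee payoff)."""
    return ENDOWMENT - sent + returned, MULTIPLIER * sent - returned

class QLearner:
    """Tabular epsilon-greedy Q-learner; a toy stand-in for a DQN agent."""
    def __init__(self, epsilon: float = 0.1, alpha: float = 0.1):
        self.q = defaultdict(float)          # (state, action) -> value estimate
        self.epsilon, self.alpha = epsilon, alpha

    def act(self, state, actions):
        if random.random() < self.epsilon:   # explore
            return random.choice(actions)
        return max(actions, key=lambda a: self.q[(state, a)])  # exploit

    def update(self, state, action, reward):
        # One-step (bandit-style) value update; the paper's agents instead
        # learn over repeated interactions with full deep Q-network machinery.
        key = (state, action)
        self.q[key] += self.alpha * (reward - self.q[key])

# Self-play: two agents learn only from the payoffs of playing each other.
investor, trustee = QLearner(), QLearner()
for episode in range(50_000):
    sent = investor.act("start", list(range(ENDOWMENT + 1)))
    pool = MULTIPLIER * sent
    returned = trustee.act(pool, list(range(pool + 1)))
    inv_pay, tru_pay = payoffs(sent, returned)
    investor.update("start", sent, inv_pay)
    trustee.update(pool, returned, tru_pay)

greedy_send = max(range(ENDOWMENT + 1), key=lambda a: investor.q[("start", a)])
print("investor's learned amount to send:", greedy_send)
```

In this stripped-down one-shot version, the learners tend toward the selfish equilibrium (sending and returning nothing); the paper's contribution lies in identifying the conditions under which richer agents and repeated interaction instead give rise to trusting, cooperative play.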
Funding Sponsor
San José State University
Keywords
artificial intelligence, decision support system, deep Q-network, interactive learning, social intelligence, trust, trustworthiness
Department
Global Innovation and Leadership
Recommended Citation
Jason Xianghua Wu, Yan Wu, Kay Yut Chen, and Lei Hua. "Building Socially Intelligent AI Systems: Evidence from the Trust Game Using Artificial Agents with Deep Learning." Management Science 69, no. 12 (2023): 7236–7252. https://doi.org/10.1287/mnsc.2023.4782