AI as Humans? Using LLMs to Synthesize Human Responses in Persuasive Contexts

Abstract:

Generative AI and LLMs are not only enhancing the efficiency of communication research; they are fundamentally reshaping how scholars observe and understand the intricacies of human communication processes and effects. By automating tasks that once required extensive human effort, such as experimental design and data annotation, LLMs provide new ways to approach complex communication dynamics and enhance the diversity and inclusiveness of communication research. This talk explores the capability of LLMs to synthesize and replicate human-AI persuasive conversations, illuminating LLMs' potential for predicting how humans converse with persuasive AI and the resultant changes in their beliefs. Building on the design and findings of a recent Science paper (Costello et al., 2024), we configure two GPT-4o agents: one replicates the persuasive AI chatbot, and the other simulates a human participant using that participant's demographic and other personal information. Results show significantly lower belief change in the simulated conditions than in the original human-chatbot interactions, suggesting that while LLM-based simulations can approximate conversational patterns, they may fall short of replicating the strength of persuasive effects observed in human-AI dialogues. Implications of these findings for the use of AI in communication interventions and the study of belief modification are discussed.
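The two-agent setup described above can be sketched as a simple alternating dialogue loop. This is an illustrative assumption, not the authors' actual code: the `generate` function is a stub standing in for a GPT-4o chat-completion call, and the prompts and profile fields are hypothetical.

```python
# Sketch of the two-agent simulation: one agent plays the persuasive
# chatbot, the other simulates a human participant seeded with
# demographic information. In a real run, `generate` would call the
# GPT-4o chat API with the system prompt and conversation history.

def generate(system_prompt, history):
    """Placeholder for a GPT-4o chat-completion call (stubbed here)."""
    role = "chatbot" if "persuade" in system_prompt else "participant"
    return f"[{role} reply at turn {len(history)}]"

def simulate_dialogue(participant_profile, belief, n_rounds=3):
    # Agent 1: replicates the persuasive AI chatbot.
    chatbot_sys = (
        "You are an AI chatbot. Gently persuade the user to reconsider "
        f"their belief: '{belief}'. Use evidence and empathy."
    )
    # Agent 2: simulates the human participant from their profile.
    participant_sys = (
        f"You simulate a human participant with this profile: "
        f"{participant_profile}. You currently hold the belief: "
        f"'{belief}'. Respond naturally and in character."
    )
    history = []
    for _ in range(n_rounds):
        history.append(("chatbot", generate(chatbot_sys, history)))
        history.append(("participant", generate(participant_sys, history)))
    return history

transcript = simulate_dialogue(
    {"age": 34, "education": "college", "party": "independent"},
    "The moon landing was staged.",
)
```

Belief change would then be measured by eliciting the simulated participant's belief rating before and after the dialogue, mirroring the pre/post design of the original study.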

Speaker:

Prof. Jingwen Zhang

Associate Professor

Department of Communication

University of California, Davis
