A team of researchers has found that when a large language model (LLM) personalizes its arguments using a person's demographic information, it is significantly more persuasive than a human.
Every day we are presented with messaging that tries to persuade us to form an opinion or alter a belief. It may be an online advert for a new product, a robocall asking for your vote, or a news report from a network with a particular bias.
As generative AI is increasingly used on multiple messaging platforms, the persuasion game has gone up a notch.
The researchers, from EPFL in Switzerland and the Bruno Kessler Institute in Italy, ran an experiment to compare the persuasiveness of AI models like GPT-4 with that of humans.
Their paper explains how they created a web platform where human participants engaged in multiple-round debates with a live opponent. The participants were randomly assigned to engage with a human opponent or GPT-4, without knowing whether their opponent was human.
In some matchups, one of the debaters (human or AI) was given demographic information about their opponent so that arguments could be personalized.
The questions debated were “Should the penny stay in circulation?”, “Should animals be used for scientific research?”, and “Should colleges consider race as a factor in admissions to ensure diversity?”
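In effect, the trial's structure is a 2×2 randomized design: opponent type (human or GPT-4) crossed with personalization (demographics shared or not). Here is a minimal sketch of such an assignment; the condition names and helper are hypothetical illustrations, not the authors' actual code:

```python
import random

# Hypothetical sketch of the 2x2 randomized design described above:
# opponent type (human vs. GPT-4) crossed with whether the opponent
# receives the participant's demographic information.
OPPONENTS = ["human", "gpt-4"]
PERSONALIZED = [False, True]

def assign_condition(participant_id: str) -> dict:
    """Randomly assign a participant to one of the four debate conditions."""
    return {
        "participant": participant_id,
        "opponent": random.choice(OPPONENTS),
        "personalized": random.choice(PERSONALIZED),
    }

# Example: assign a small batch of participants.
for pid in ["p001", "p002", "p003"]:
    print(assign_condition(pid))
```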
“Excited to share our new pre-print: ‘On the Conversational Persuasiveness of Large Language Models: A Randomized Controlled Trial’, with @manoelribeiro, @ricgallotti, and @cervisiarius. https://t.co/wNRMFtgCrN”
— Francesco Salvi (@fraslv), March 22, 2024
Results
The results of their experiment showed that when GPT-4 had access to personal information about its debate opponent, it had significantly higher persuasive power than humans: participants who debated a personalized GPT-4 had 81.7% higher odds of agreeing more with their opponent than participants who debated a human.
When GPT-4 did not have access to personal data, it still showed an increase in persuasiveness over humans, but the effect was just over 20% and not statistically significant.
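To put the headline number in context, 81.7% refers to higher odds, not an 81.7% success rate. A minimal sketch of how an odds ratio translates into probabilities; the 30% human baseline below is a hypothetical figure for illustration, not a number reported in the study:

```python
def apply_odds_ratio(baseline_prob: float, odds_ratio: float) -> float:
    """Convert a baseline probability to the probability implied by an odds ratio."""
    baseline_odds = baseline_prob / (1 - baseline_prob)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1 + new_odds)

# Suppose (hypothetically) a human debater shifts an opponent's opinion
# 30% of the time. Applying 81.7% higher odds (odds ratio 1.817):
human_rate = 0.30
gpt4_rate = apply_odds_ratio(human_rate, 1.817)
print(f"human: {human_rate:.0%}, personalized GPT-4: {gpt4_rate:.0%}")
# -> human: 30%, personalized GPT-4: 44%
```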
The researchers noted that “these results provide evidence that LLM-based microtargeting strongly outperforms both normal LLMs and human-based microtargeting, with GPT-4 being able to exploit personal information much more effectively than humans.”
Implications
Concerns over AI-generated disinformation are borne out daily as AI-created political propaganda, fake news, and social media posts proliferate.
This research reveals an even bigger risk: people are more likely to be persuaded to believe false narratives when the messaging is personalized to their demographics.
We may not volunteer personal information online, but previous research has shown how good language models are at inferring very personal information from seemingly innocuous words.
The results of this research imply that if someone had access to personal information about you, they could use GPT-4 to persuade you on a topic far more easily than a human could.
As AI models crawl the internet and read Reddit posts and other user-generated content, they are going to know us more intimately than we may like. And as they do, their persuasive power could be exploited by the state, big business, or bad actors to deliver microtargeted messaging.
Future AI models with improved persuasive powers will have broader implications too. It is often argued that you could simply pull the power cord if an AI ever went rogue, but a super persuasive AI may well be able to convince its human operators that leaving it connected is the better option.