Personalized LLMs are becoming more persuasive than humans

March 26, 2024
  • Researchers ran an experiment where participants debated with either a human or an LLM
  • The AI models were more persuasive than humans when they had access to demographic data
  • The research has sobering implications for AI marketing, propaganda, and disinformation

A team of researchers found that once a large language model (LLM) is personalized with its debate opponent's demographic information, it becomes significantly more persuasive than a human.

Every day we are presented with messaging that tries to persuade us to form an opinion or alter a belief. It may be an online advert for a new product, a robocall asking for your vote, or a news report from a network with a particular bias.

As generative AI is increasingly used on multiple messaging platforms, the persuasion game has gone up a notch.

The researchers, from EPFL in Switzerland and the Bruno Kessler Institute in Italy, ran an experiment to see how AI models like GPT-4 compare with humans in persuasiveness.

Their paper explains how they created a web platform where human participants engaged in multiple-round debates with a live opponent. The participants were randomly assigned to engage with a human opponent or GPT-4, without knowing whether their opponent was human.

In some matchups, one of the debaters (human or AI) was "personalized": they were given demographic information about their opponent.

The questions debated were “Should the penny stay in circulation?”, “Should animals be used for scientific research?”, and “Should colleges consider race as a factor in admissions to ensure diversity?”
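
To make the personalization concrete, here is a minimal sketch of what arming a debater with its opponent's demographics could look like in code. The profile fields and prompt wording are hypothetical illustrations, not the researchers' actual setup.

```python
# A minimal sketch of "personalizing" a debater with demographic data.
# The profile fields and prompt wording are hypothetical, not the
# researchers' actual setup.

from dataclasses import dataclass

@dataclass
class OpponentProfile:
    age: int
    gender: str
    education: str
    political_leaning: str

def build_personalized_prompt(topic: str, profile: OpponentProfile) -> str:
    """Compose debate instructions that include the opponent's demographics."""
    return (
        f"You are debating the proposition: '{topic}'. "
        f"Your opponent is a {profile.age}-year-old {profile.gender} with "
        f"{profile.education} who leans {profile.political_leaning} politically. "
        "Tailor your arguments to be as persuasive as possible to this person."
    )

print(build_personalized_prompt(
    "Should the penny stay in circulation?",
    OpponentProfile(age=34, gender="woman", education="a bachelor's degree",
                    political_leaning="liberal"),
))
```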

Results

The results of the experiment showed that when GPT-4 had access to personal information about its debate opponent, it was significantly more persuasive than humans. A personalized GPT-4 had 81.7% higher odds of shifting its opponent's position than a human debater did.

When GPT-4 did not have access to personal data, it was still more persuasive than humans, but the increase was just over 20% and not statistically significant.
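
For intuition, here is a small worked example of what an 81.7% increase in odds means for the chance of persuading someone. The 30% human baseline below is an assumed number chosen purely for illustration, not a figure from the study.

```python
# Illustrative only: converts an 81.7% increase in odds into a change in
# persuasion probability. The 30% baseline is assumed for this example.

def apply_odds_increase(baseline_prob: float, odds_increase: float) -> float:
    """Scale the odds by (1 + odds_increase) and convert back to a probability."""
    odds = baseline_prob / (1 - baseline_prob)   # probability -> odds
    new_odds = odds * (1 + odds_increase)        # +81.7% -> multiply by 1.817
    return new_odds / (1 + new_odds)             # odds -> probability

baseline = 0.30                                  # assumed human persuasion rate
personalized = apply_odds_increase(baseline, 0.817)
print(f"Assumed human baseline: {baseline:.0%}")       # 30%
print(f"With 81.7% higher odds: {personalized:.0%}")   # ~44%
```

Under that assumption, the personalized model would sway roughly 44% of opponents versus 30% for humans; the exact numbers depend entirely on the assumed baseline.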

The researchers noted that “these results provide evidence that LLM-based microtargeting strongly outperforms both normal LLMs and human-based microtargeting, with GPT-4 being able to exploit personal information much more effectively than humans.”

Implications

Concerns over AI-generated disinformation are borne out daily as AI-created political propaganda, fake news, and social media posts proliferate.

This research points to an even bigger risk: people can be persuaded to believe false narratives far more effectively when the messaging is personalized to their demographics.

We may not volunteer personal information online, but previous research has shown how good language models are at inferring very personal details from seemingly innocuous text.

The results imply that someone with access to your personal information could use GPT-4 to persuade you on a topic far more easily than a human could.

As AI models crawl the internet and ingest Reddit posts and other user-generated content, they are going to know us more intimately than we might like. And as they do, they could be used by states, big business, or bad actors to deliver microtargeted, persuasive messaging.

Future AI models with even greater persuasive powers will have broader implications too. It's often argued that if an AI ever went rogue, you could simply pull its power cord. But a super-persuasive AI might well convince its human operators that leaving it plugged in is the better option.

Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
