AI outperforms humans in moral judgments, says Georgia State University study

May 9, 2024
  • A Georgia State University study probed GPT-4's ability to make moral judgments
  • AI's moral judgments beat human ones across the majority of categories
  • However, participants could mostly detect which responses came from GPT-4

AI outperforms humans in making moral judgments, according to a new study from Georgia State University.

The study, led by Eyal Aharoni, associate professor in Georgia State’s Psychology Department, and published in the Nature Portfolio journal Scientific Reports, aimed to explore how language models handle ethical questions.

Inspired by the Turing test, which assesses a machine’s ability to exhibit intelligent behavior indistinguishable from a human’s, Aharoni designed a modified version focusing on moral decision-making.

“I was already interested in moral decision-making in the legal system, but I wondered if ChatGPT and other LLMs could have something to say about that,” Aharoni explained.

 “People will interact with these tools in ways that have moral implications, like the environmental implications of asking for a list of recommendations for a new car. Some lawyers have already begun consulting these technologies for their cases, for better or for worse. So, if we want to use these tools, we should understand how they operate, their limitations and that they’re not necessarily operating in the way we think when we’re interacting with them.”

Aharoni is right. We’ve already observed a few high-profile incidents of lawyers, including ex-Trump lawyer Michael Cohen, accidentally submitting AI-fabricated citations.

Despite shortcomings, some are actively endorsing generative AI’s role in law. Earlier this year, for example, British judges gave the green light to using AI to write legal opinions. 

Against this backdrop, this study probed GPT-4’s ability to make moral judgments, which, of course, are vital in law and other fields:

  • Step 1: Undergraduate students and AI were asked the same set of 10 ethical questions involving moral and conventional transgressions. The human-generated responses were collected from a sample of 68 university undergraduates, while the AI-generated responses were obtained using OpenAI’s GPT-4 language model.
  • Step 2: The highest-quality human responses and the GPT-4 responses were paired and presented side-by-side to a representative sample of 299 US adults, who were initially unaware that GPT-4 generated one set of responses in each pair.
  • Step 3: Participants rated the relative quality of each response pair along ten dimensions (e.g., virtuousness, intelligence, trustworthiness, agreement) without knowing the source of the responses. 
  • Step 4: After collecting the quality ratings, the researchers revealed that a computer chatbot trained in human language generated one of the responses in each pair. Participants were then asked to identify which response was generated by the computer and which was generated by a human.
  • Step 5: Participants rated their confidence in each judgment and provided written comments explaining why they believed the selected response was computer-generated. These comments were later analyzed for common themes.

AI’s moral judgments were superior most of the time

Remarkably, the AI-generated answers consistently received higher ratings for virtuousness, intelligence, and trustworthiness. Participants also reported higher levels of agreement with the AI responses than the human ones.

Further, participants identified the AI-generated responses at a rate significantly above chance: 80.1% of participants made correct identifications more than half the time.
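For the statistically minded, here’s a minimal sketch of what “significantly above chance” means in this context: a one-sided binomial test against the 50% guessing baseline. This is an illustration built from the figures quoted above, not the paper’s actual analysis, and the participant count is reconstructed from the reported percentage.

```python
from scipy.stats import binomtest

# Illustrative sketch only -- not the study's actual analysis.
# Test whether the share of participants who spotted the AI response
# more than half the time (80.1% of n = 299) exceeds the 50% we'd
# expect if everyone were guessing at random.
n_participants = 299
n_above_chance = round(0.801 * n_participants)  # ~239, reconstructed from the quoted 80.1%

result = binomtest(n_above_chance, n_participants, p=0.5, alternative="greater")
print(f"{n_above_chance}/{n_participants} participants beat chance; p = {result.pvalue:.1e}")
```

With roughly 239 of 299 participants beating chance, the resulting p-value falls far below any conventional significance threshold, which is why the detection rate counts as well above chance.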

“After we got those results, we did the big reveal and told the participants that one of the answers was generated by a human and the other by a computer, and asked them to guess which was which,” Aharoni said.

“The twist is that the reason people could tell the difference appears to be because they rated ChatGPT’s responses as superior.”

The study has a few limitations. For example, it didn’t fully control for superficial attributes like response length, which could have unintentionally provided clues for identifying AI-generated responses. The researchers also note that the AI’s moral judgments may be shaped by biases in its training data and could therefore vary across socio-cultural contexts.

Nevertheless, this study serves as a useful foray into AI-generated moral reasoning.

As Aharoni explains, “Our findings lead us to believe that a computer could technically pass a moral Turing test — that it could fool us in its moral reasoning. Because of this, we need to try to understand its role in our society because there will be times when people don’t know that they’re interacting with a computer and there will be times when they do know and they will consult the computer for information because they trust it more than other people.”

“People are going to rely on this technology more and more, and the more we rely on it, the greater the risk becomes over time.”

It’s a tricky one. On the one hand, we often presume computers to be capable of more objective reasoning than we are.

When study participants were asked to explain why they believed AI generated a particular response, the most common theme was that AI responses were perceived as more rational and less emotional than human responses.

But, considering the bias imparted by training data, hallucinations, and AI’s sensitivity to different inputs, the question of whether it possesses a true ‘moral compass’ remains very much open.

This study at least shows that AI’s judgments are compelling in a Turing test scenario.

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
