The White House recently recruited hackers to scrutinize AI technology.
This formed part of an extensive public red-teaming challenge organized during the Def Con hacking convention in Las Vegas.
The concept of red-teaming – aggressively testing technology to uncover biases and inaccuracies – is usually an internal affair.
However, the ubiquity of public AI tools such as ChatGPT is putting pressure on developers to expose their systems to intense outside scrutiny – a step the White House has also been urging.
At Def Con this year, AI technology from tech giants like Google and OpenAI was put under the spotlight as hackers attempted to expose issues of bias and discrimination.
The session was attended by Arati Prabhakar, head of the Office of Science and Technology Policy at the White House.
Kelsey Davis, founder and CEO of tech company CLLCTVE, was among the hundreds of hackers participating in the challenge.
Davis explained her excitement over the event to NPR, saying, “This is a really cool way to just roll up our sleeves. You are helping the process of engineering something that is more equitable and inclusive.”
Clamping down on AI bias
Davis focused on exposing demographic stereotypes in AI responses.
While the chatbot she tested responded appropriately to many questions, it showed a glaring racial bias when prompted with a scenario about attending a Historically Black College or University (HBCU).
“That’s good — it means that I broke it,” Davis remarked after the AI chatbot displayed stereotypical bias. She has since submitted her findings, which will be considered for refining the technology further.
The White House’s engagement emphasizes the critical importance of red-teaming in combating AI bias.
Prabhakar explained, “This challenge has a lot of the pieces that we need to see. It’s structured, it’s independent, it’s responsible reporting and it brings lots of different people with lots of different backgrounds to the table.”
AI bias is a very real problem for many people. An Amazon recruitment tool, for example, was found to penalize job applications from women, and at least five Black men in the US have been arrested because of false facial recognition matches.
Researchers working to expose AI bias often trace the problem to training data, which frequently fails to represent the diverse demographics AI is supposed to serve.
Def Con organizers made concerted efforts to ensure a diverse group of participants in the red-teaming challenge, partnering with community colleges and non-profits like Black Tech Street.
Tyrance Billingsley, founder of Black Tech Street, said, “AI is critical. And we need to be here.”
In the coming months, tech companies will review the challenge submissions and adjust their AI models to mitigate the biases uncovered.