PauseAI protestors demand a halt to training of AI models

May 14, 2024

  • PauseAI is a group of activists calling for a pause in training AI models more powerful than GPT-4
  • The group arranged protests in multiple cities around the world to raise awareness of AI dangers
  • The protestors want world leaders to use the upcoming AI Seoul Summit to address their demands

AI safety activist group PauseAI coordinated global protests to call for a pause in development of AI models more powerful than GPT-4.

Protestors came together on Monday in 14 cities around the world, including New York, London, Sydney, and São Paulo, to raise awareness of potential AI risks.

The stated goal of the protest, according to PauseAI’s website, is to “convince the few powerful individuals (ministers) who will be visiting the next AI Safety Summit (the 22nd of May) to be the adults in the room. It’s up to us to make them understand that the risks are too large to ignore, that they are the only ones who have the power to fix the problem.”

The AI Seoul Summit, which takes place on 21 and 22 May, is the six-month follow-up to the UK AI Safety Summit held last November. The initial enthusiasm for international cooperation on AI safety seems to be wavering, with multiple participants pulling out of the upcoming summit.

PauseAI is pragmatic in acknowledging that “we cannot expect countries or companies to risk their competitive advantage by pausing AI training runs for a long time if other countries or companies do not do the same. This is why we need a global Pause.”

They want the primary goal of the upcoming summit to be the establishment of an international AI safety agency, similar to the IAEA. Ironically, this is something that OpenAI CEO Sam Altman also suggested.

PauseAI’s protests calling for a halt to more advanced models took place on the same day that OpenAI released GPT-4o, a faster and more powerful iteration of its GPT-4 model.

While AI fans marveled over GPT-4o’s new capabilities, the PauseAI protestors were far less enthusiastic about the accelerating pace of new AI model releases.

Will sit-ins, placards, and handing out leaflets be enough to coax world leaders into taking the meaningful action PauseAI says humanity needs? Can the inexorable march of AI advancements be stopped long enough to ensure that AI safety guardrails are in place?

PauseAI thinks it’s possible. In a post on X, the group said, “AGI is not inevitable. It requires hordes of engineers with million-dollar paychecks. It requires a fully functional and unrestricted supply chain of the most complex hardware. It requires all of us to allow these companies to gamble with our future.”

Protests have been effective in spurring action on GMO foods, nuclear weapons, and climate change. But the inherent dangers of those issues are more widely accepted by world leaders.

The arguments over future AI safety issues have to compete with growing excitement over tangible AI benefits that humanity is experiencing right now.

Will PauseAI protestors achieve their objective and earn humanity’s gratitude? Will their fears ultimately prove to be unfounded? Or will they be the ones who eventually get to say, “We told you so”?


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
