AI joined topics such as Ukraine, Putin, and conflict in the Middle East at the 2024 World Economic Forum in Davos.
The event witnessed a visibly fragmented debate on the technology, with politicians and UN officials trading views with industry leaders like OpenAI CEO Sam Altman and Microsoft CEO Satya Nadella.
Prior to the forum, the WEF’s “Global Risks Report 2024” placed AI-driven misinformation and disinformation at the forefront of its risk rankings, remarkably ahead of climate change, warfare, and economic disruption.
Carolina Klint, Chief Commercial Officer for Europe at Marsh McLennan, a collaborator on the report, told CNBC, “AI can build out models for influencing large populations of voters in a way that we haven’t seen before. How that is going to play out is going to be quite important for us to watch.”
AI misinformation and election interference were key topics at the event itself, with debates weighing the pros of AI-driven productivity and efficiency against the cons of misinformation, deepfakes, and other potentially catastrophic consequences.
Overall, the tone of this debate showed a greater focus on the near-term risks of AI than more abstract, longer-term extinction-level consequences. With major elections forthcoming this year, there’s growing anxiety over how AI might be employed to influence voting behavior.
Those representing the AI industry were sometimes cautiously optimistic, sometimes bullish, and sometimes skeptical about the industry’s ability to handle these risks.
Let’s take a look at some key political insights on AI from the week:
UN Secretary-General António Guterres: a cautionary stance on AI
António Guterres delivered a powerful message about the risks associated with the rapid expansion of AI, particularly criticizing the behavior of large technology companies.
He cautioned, “Powerful tech companies are already pursuing profits with a reckless disregard for human rights, personal privacy, and social impact.”
Highlighting the broader implications of AI, Guterres drew a parallel between the challenges posed by AI and the climate crisis, noting the lack of a global strategy to address both issues.
Guterres emphasized the urgent need for collaborative governance, saying, “We need governments urgently to work with tech companies on risk management frameworks for current AI development; and on monitoring and mitigating future harms.”
Ursula von der Leyen: a more optimistic view
Meanwhile, Ursula von der Leyen, President of the European Commission, offered a more optimistic outlook on AI.
In her address, she acknowledged the potential risks but also emphasized the opportunities AI presents. Von der Leyen said, “AI is also a very significant opportunity if used in a responsible way.”
Von der Leyen also highlighted initiatives to help harness AI’s potential, mentioning a proposal to connect European AI startups with the computing power of the Continent’s supercomputers.
She compared this initiative to Microsoft’s approach with ChatGPT, saying, “We will provide European startups and [small and medium-sized businesses] with access to our own world-class supercomputers.”
Geopolitical clashes around AI
Chinese Premier Li Qiang made a sales pitch for multinational companies to invest in China’s economy, which has faced challenges due to protracted trading restrictions from the US. Nvidia, for example, has been repeatedly blocked from shipping its AI-focused GPUs to China.
Li Qiang expressed a cautious stance towards AI during his address. He pointedly remarked, “Human beings must control the machines instead of having the machines control us.”
Emphasizing the need for responsible development, he added, “AI must be guided in a direction that is conducive to the progress of humanity, so there should be a redline in AI development — a red line that must not be crossed.”
Li Qiang outlined a five-point plan to improve economic cooperation between China and the West.
He emphasized the need for greater macroeconomic coordination, unhindered supply chains centered around China, increased collaboration on environmental goals, and enhanced technological cooperation.
While he didn’t mention the US directly, he repeatedly made negative allusions to unspecified nations, in a thinly veiled reference to the US.
Li aimed to reassure global investors who had grown wary of China due to its economic slowdown and increased government intervention. He emphasized that investing in the Chinese market was an opportunity, not a risk, and stated that the Communist Party welcomed foreign investors.
Following Li’s pitch, Ursula von der Leyen’s speech in Davos centered on democracy and Western values. She mentioned democracy and freedom multiple times, portraying Europe as a global leader in addressing global challenges and emphasizing the importance of collaboration.
However, divisions among world leaders were evident, as many members of the US delegation had already left prior to China’s talks, again showing the symbolic loggerheads we’ve become accustomed to.
Rising concerns over AI in election campaigns
Alexandra Reeve Givens, CEO of the Center for Democracy and Technology, expressed concerns about using AI in elections, citing past situations where technology was used to influence voting behavior.
That, in itself, is no new concept. The point is, however, that AI brings unprecedented realism to fake propaganda and helps scale questionable tactics like robo-calling.
“In previous election cycles, we’ve seen robocalls. We’ve seen automated text message campaigns that are sending incorrect information to voters about their voting location, about whether or not their poll is open or targeted, manipulated messages that are designed to influence their behavior,” Givens noted.
She argued that generative AI will exacerbate these issues: “Generative AI makes it easier to target that. […] it’s easier than ever to come up with those tailored, personalized messages.”
Tech companies defended their ability to act on deep fakes. OpenAI CEO Sam Altman referred to recent strategies his company set out ahead of the US election, and Matthew Brittin, president of EMEA Business & Operations for Google, spoke about the potential of large language models (LLMs) to fight misinformation.
“We want to be in a position where anything that’s produced by our generative AI can be watermarked so invisibly, even if a snippet of a video or a piece of an image is used, that it will be detectable automatically,” Brittin explained, referring to SynthID, a Google DeepMind tool embedding digital watermarks in AI-generated content.
To date, there’s little evidence these techniques work effectively at scale. For instance, a paper published by researchers from the University of California in October concluded that “All invisible watermarks are vulnerable.”
Moreover, even if commercial AI image generators like DALL-E employ an effective watermark, companies are not generally obligated to do so.
There’s another side to this, too: bad actors could add AI watermarks to authentic images to discredit real content as fake, a tactic tied to the so-called “liar’s dividend.”
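To make the watermarking idea concrete, here is a deliberately simplified sketch of one classic technique: hiding a signature in the least-significant bits (LSBs) of pixel values. This is not how SynthID works (Google DeepMind has not published its method), and the signature, pixel values, and function names here are purely illustrative; the point is only that a mark can be embedded without visibly altering an image, and why such marks can be fragile.

```python
# Toy invisible watermark via least-significant-bit (LSB) embedding.
# Illustrative only: real systems (e.g. SynthID) use far more robust,
# unpublished techniques. All values below are hypothetical.

WATERMARK = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical 8-bit signature

def embed(pixels, mark=WATERMARK):
    """Write the mark's bits into the LSBs of the first len(mark) pixels."""
    out = list(pixels)
    for i, bit in enumerate(mark):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the mark bit
    return out

def detect(pixels, mark=WATERMARK):
    """Report whether the LSBs of the leading pixels match the mark."""
    return [p & 1 for p in pixels[:len(mark)]] == mark

image = [200, 13, 77, 154, 90, 31, 240, 66, 128]  # toy grayscale pixel values
marked = embed(image)

print(detect(marked))  # the mark is present in the watermarked copy
print(detect(image))   # the untouched original does not carry the mark
# Each pixel changes by at most 1, i.e. the edit is visually imperceptible:
print(max(abs(a - b) for a, b in zip(image, marked)))
```

The fragility researchers point to is easy to see in this toy: any re-encoding, resizing, or compression that perturbs pixel values by even one unit destroys the LSB signature, which is why robust, snippet-level detection of the kind Brittin describes remains an open problem.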
AI will continue to feature heavily in political discourse this year.