At the January 2024 World Economic Forum, tech leaders and politicians alike descended upon Davos in Switzerland to discuss AI, among other topics.
As Fortune’s tech editor Alexei Oreskovic described, this annual event looks and feels like an AI conference, with leaders from global tech giants such as Microsoft, OpenAI, Google, and Meta joining politicians from across the political spectrum.
With discussions swinging from AI’s opportunities to AI apocalypse, reports describe palpable confusion about the technology’s direction.
Here’s a round-up of talks from AI leaders and researchers at the event thus far:
Microsoft CEO Satya Nadella on AI optimism
Satya Nadella, Microsoft’s Chairman and CEO, spoke about the future of AI, expressing a mix of optimism and caution.
Nadella, in a conversation with Klaus Schwab, chairperson of the World Economic Forum, shared his hopeful stance on AI. “I am hopeful and optimistic about the future of artificial intelligence,” he stated.
He also addressed the late-2023 OpenAI leadership debacle, commenting on the startup’s governance structure: “I’m comfortable. I have no issues with any structure.”
OpenAI’s board, responsible for safeguarding the startup’s mission to develop beneficial AI, reinstated Altman shortly after firing him and is now expanding its board membership.
Altman himself mentioned, “I expect us to make a lot of progress on that in the coming months,” adding, “And then after that, the new board will take a look at the governance structure.”
Nadella further addressed the potential of AI to greatly enhance productivity and its applications in diverse fields, such as better job creation, education, and medical treatments. Yet, he didn’t shy away from the serious concerns AI raises, including implications for employment and ethical challenges.
He also emphasized the need for global cooperation in adopting industry standards for AI, stating, “These are global challenges and require global norms and standards.”
Reflecting on the industry’s responsibility, Nadella pointed out the importance of considering safety, trust, and equity in developing and deploying AI technologies.
He also discussed Microsoft’s recent breakthrough in battery technology, enabled by AI.
OpenAI CEO Sam Altman on AI’s energy demands
Sam Altman, CEO of OpenAI, addressed the energy demands of AI. He mentioned that an energy breakthrough is necessary to sustain the future demands of AI.
“AI will consume vastly more power than people expected,” he stated, suggesting that more climate-friendly energy sources, such as nuclear fusion or cheaper solar power, are vital for AI’s progress. Reports have estimated that training a large AI model can consume as much electricity as a small country.
Altman went on to emphasize that breakthroughs in energy technology, such as solar power and nuclear fusion, would not only sustain AI’s growing demands but could also help mitigate its environmental impact and benefit the climate more broadly.
https://www.youtube.com/watch?v=wHeSBEaNmxQ
Altman further highlighted the swift pace at which AI is evolving, surpassing previous technological advancements in Silicon Valley. He acknowledged that this rapid development will necessitate making “uncomfortable” decisions.
He also explained that future AI models would need to offer a high degree of individual customization, providing different answers based on users’ values, preferences, and possibly their country of residence. He anticipates, “That’s going to make a lot of people uncomfortable.”
On the subject of AI adapting to different cultural norms, Altman clarified, “If the country said, you know, all gay people should be killed on sight, then no…that is well out of bounds. But there are probably other things that I don’t personally agree with, but a different culture might…We have to be somewhat uncomfortable as a tool builder with some of the uses of our tools.”
This speaks to the broader problem of cultural universalism in AI systems, which risk folding diverse human creativity and experience into a single, culturally narrow perspective.
Lila Ibrahim, Chief Operating Officer of DeepMind, on scientific breakthroughs
Lila Ibrahim, COO of DeepMind, highlighted AI’s contributions to scientific progress in an interview with Axios.
In 2023, DeepMind developed GNoME to identify 2.2 million potential new materials as part of an ‘autonomous research lab,’ with implications for developing new types of chips, batteries, and solar panels.
DeepMind’s AlphaFold is another example of an AI tool that solved the decades-old challenge of determining protein structures and showcases the enormous potential of AI in biology.
Ibrahim noted a shift from when she joined DeepMind in 2018, recalling that “AlphaFold was an idea that wasn’t working” at that time. Now, the tool has predicted the structures of some 200 million known proteins.
AI for scientific discovery and problem-solving was a recurring theme among other speakers that day.
Yann LeCun, Chief AI Scientist at Meta, on open-source AI
Yann LeCun, Chief AI Scientist at Meta, spoke on the significance of open-source AI in cultivating rapid technological advancement.
LeCun underscored Meta’s contributions to the open-source AI community, which he views as the antidote to big tech’s monopolies.
He emphasized that collaborative efforts and knowledge sharing are crucial for driving innovation in AI. LeCun’s advocacy reflects his view that open-source AI accelerates the development and accessibility of AI technologies beyond big tech’s own visions.
In the same talk, Andrew Ng, formerly of Google Brain, discussed how Gemini Ultra and V image search models are already pushing the boundaries this year, predicting that AI progress will continue to accelerate.