The 2024 Nature Index supplement on Artificial Intelligence, released this week, reveals a scientific world in the throes of an AI-driven paradigm shift.
This annual report, published by the journal Nature, tracks high-quality science by measuring research outputs in 82 natural science journals, selected by an independent panel of researchers.
The latest edition illustrates how AI is not just changing what scientists study, but fundamentally altering how research is conducted, evaluated, and applied globally.
One of the most striking trends revealed in the Index is the surge in corporate AI research. US companies have more than doubled their output in Nature Index journals since 2019, with their Share (a metric used by the Index to measure research output) increasing from 51.8 to 106.5.
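For readers unfamiliar with the metric: Share assigns each article a total credit of 1, divided among its contributing authors and summed per institution or country. The sketch below illustrates that fractional counting with made-up article data (the institution names and author counts are illustrative, not real Index figures).

```python
# Sketch of the Nature Index "Share" metric: each article carries a
# total value of 1, split equally among its authors, then summed per
# institution. The article data below is illustrative, not real data.

from collections import defaultdict

articles = [
    {"authors": {"Company A": 2, "University B": 2}},  # 4 authors total
    {"authors": {"Company A": 1, "University C": 3}},  # 4 authors total
]

share = defaultdict(float)
for article in articles:
    total_authors = sum(article["authors"].values())
    for institution, n_authors in article["authors"].items():
        # Each institution receives the fraction of the article's
        # single credit that its authors represent.
        share[institution] += n_authors / total_authors

print(dict(share))  # Company A ends up with 0.5 + 0.25 = 0.75
```

Because credit is fractional, a rising corporate Share reflects genuine growth in authorship, not just more co-authored papers.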
However, this boom in R&D activity comes with a caveat: it still accounts for only 3.8% of total US AI research output in these publications. In other words, despite a major uplift in corporate AI R&D, relatively little of that effort surfaces as published research.
This raises questions about where corporate AI research is located. Are companies publishing their most groundbreaking work in other venues, or keeping it under lock and key?
The answer is one of competing models and narratives. OpenAI, Microsoft, Google, Anthropic, and a handful of others remain firmly committed to the closed-source approach, while the open-source AI camp, led by Meta, Mistral, and others, is rapidly gaining ground.
Contributing to this, the funding disparity between private companies and public institutions in AI research is staggering.
In 2021, according to Stanford University’s AI Index Report, private sector investment in AI worldwide reached approximately $93.5 billion.
This includes spending by tech giants like Google, Microsoft, and Amazon, as well as AI-focused startups and other corporations across various industries.
In contrast, public funding for AI research is much lower. The US government’s non-defense AI R&D spending in 2021 was about $1.5 billion, while the European Commission allocated around €1 billion (approximately $1.1 billion) for AI research that year.
This funding gap gives private companies a clear advantage in AI development: they can afford more powerful computing resources and larger datasets, and they attract top talent with higher salaries.
“We’re increasingly looking at a situation where top-notch AI research is done primarily within the research labs of a rather small number of mostly US-based companies,” explained Holger Hoos, an AI researcher at RWTH Aachen University in Germany.
While the US maintains its lead in AI research, countries like China, the UK, and Germany are emerging as major hubs of innovation and collaboration.
However, this growth isn’t uniform across the globe. South Africa stands as the only African nation in the top 40 for AI output, showing how the digital divide is at risk of deepening in the AI era.
AI in peer review: promise and peril
Peer review ensures academic and methodological rigor and transparency when papers are submitted to journals.
This year, a nonsense paper featuring giant AI-generated rat testicles was published in a Frontiers journal, showing that the peer review process is far from infallible.
Someone used DALL-E to create gobbledygook scientific figures and submitted them to Frontiers Journal. And guess what? The editor published it. LOL https://t.co/hjQkRQDkal https://t.co/aV1USo6Vt2 pic.twitter.com/VAkjJkY4dR
— Veera Rajagopal (@doctorveera) February 15, 2024
Recent experiments have shown that AI can generate research assessment reports that are nearly indistinguishable from those written by human experts.
Last year, an experiment pitting ChatGPT's peer reviews against human reviewers on the same papers found that over 50% of the AI's comments on Nature papers, and more than 77% on ICLR papers, aligned with points raised by the human reviewers.
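Measuring that kind of alignment boils down to checking whether each AI comment matches at least one human comment. The study used a far more sophisticated matching pipeline; the keyword-overlap version below is only a toy sketch of the idea, with invented example comments.

```python
# Toy illustration of measuring overlap between AI-generated and human
# review comments. This keyword-based Jaccard match is an assumption
# for illustration, not the actual methodology of the study.

def tokenize(comment):
    return {w.lower().strip(".,") for w in comment.split()}

def overlaps(ai_comment, human_comments, threshold=0.3):
    """An AI comment 'hits' if it shares enough vocabulary with any
    human comment (Jaccard similarity at or above the threshold)."""
    a = tokenize(ai_comment)
    for h in human_comments:
        b = tokenize(h)
        if len(a & b) / len(a | b) >= threshold:
            return True
    return False

human = ["The evaluation lacks a baseline comparison.",
         "Error bars are missing from Figure 2."]
ai = ["The paper should compare against a baseline in the evaluation.",
      "Consider a larger dataset."]

# Fraction of AI comments that align with some human comment.
hit_rate = sum(overlaps(c, human) for c in ai) / len(ai)
print(hit_rate)  # → 0.5
```

In practice, matching review points requires semantic comparison rather than shared vocabulary, which is why such studies typically rely on language models to do the matching itself.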
Of course, ChatGPT is much quicker than human peer reviewers. "It's getting harder and harder for researchers to get high-quality feedback from reviewers," said James Zou of Stanford University, the lead researcher on that experiment.
AI’s relationship with research is raising fundamental questions about scientific evaluation and whether human judgment is intrinsic to the process. The balance between AI efficiency and human insight is one of several key issues scientists from all backgrounds will need to grapple with in the years ahead.
AI might soon be capable of managing the entire research process from start to finish, potentially sidelining human researchers altogether.
For instance, Sakana's AI Scientist autonomously generates novel research ideas, designs and conducts experiments, and even writes and reviews scientific papers. This hints at a future where AI could drive scientific discovery with minimal human intervention.
On the methodology side, using machine learning (ML) to process and analyze data comes with risks. Princeton researchers have argued that because many ML techniques cannot be easily replicated, their use erodes the reproducibility of experiments, a key principle of high-quality science.
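One concrete facet of that reproducibility problem is uncontrolled randomness. The sketch below shows a minimal safeguard, pinning the random seed so a toy "experiment" yields identical results on every run; the function and data here are invented for illustration, and real ML pipelines have many more sources of nondeterminism (GPU kernels, data ordering, library versions) that also need controlling.

```python
# A minimal sketch of one replicability safeguard: seeding every
# source of randomness so a toy "experiment" is exactly repeatable.

import random

def run_experiment(seed):
    rng = random.Random(seed)          # isolated, seeded generator
    data = [rng.gauss(0.0, 1.0) for _ in range(1000)]
    train = rng.sample(data, 800)      # stand-in for a train/test split
    return sum(train) / len(train)     # stand-in for a model metric

# Two runs with the same seed agree exactly; different seeds
# generally do not.
assert run_experiment(42) == run_experiment(42)
print(run_experiment(42))
```

Seeding alone does not make an ML study replicable, but unseeded pipelines are a common, easily fixed failure mode the Princeton critique points at.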
Ultimately, AI's rise to prominence in every aspect of research and science is gaining momentum, and the process is likely irreversible.
Last year, Nature surveyed 1,600 researchers: 66% believe AI enables quicker data processing, 58% say it accelerates previously infeasible analyses, and 55% see it as a cost- and time-saving solution.
As Simon Baker, lead author of the supplement’s overview, concludes: “AI is changing the way researchers work forever, but human expertise must continue to hold sway.”
The question now is how the global scientific community will adapt to AI's role in research, ensuring that the AI revolution benefits all of humanity without unforeseen risks wreaking havoc on science.
As with so many aspects of the technology, mastering both benefits and risks is challenging but necessary to secure a safe path forward.