Science journal Nature surveys 1,600 researchers about AI

September 27, 2023

A recent Nature survey of more than 1,600 researchers worldwide sheds light on AI’s growing role in scientific research. 

The survey revealed that while many researchers are optimistic about AI’s potential benefits for science, concern is rising about how the technology is reshaping research as a whole. 

The study participants included:

  • Those who actively develop or study AI (48%)
  • Those who use AI tools for their research but don’t develop them (30%)
  • Those who do not use AI in their scientific pursuits (22%)

The rise of AI in scientific research

The study found that, over the past decade, there has been a noticeable uptick in research papers referencing AI terms. 

Machine learning (ML) techniques have become standard tools for data analysis and statistics. 

Additionally, generative AI tools, particularly large language models (LLMs), are being used to generate text, images, and code for scientific research. 

Some key stats about the benefits of AI in research:

  • 66% of researchers note that AI enables quicker data processing
  • 58% believe it speeds up computations that would otherwise be infeasible
  • 55% feel it’s a cost-effective and time-saving solution

Irene Kaplow, a computational biologist at Duke University, said, “AI has enabled me to make progress in answering biological questions where progress was previously infeasible.”

Concerns

However, there’s a flip side to this coin. Researchers voiced concerns about the following:

  • Increased dependence on pattern recognition without genuine understanding (69%)
  • Potentially perpetuating biases or discrimination in the results (58%)
  • The ease with which fraudulent activities might be conducted (55%)
  • The risk of irreproducible research results due to careless use of AI (53%)

Jeffrey Chuang, an expert in cancer image analysis at the Jackson Laboratory, highlighted, “The main problem is that AI is challenging our existing standards for proof and truth.”

LLMs in the spotlight

LLMs, particularly ChatGPT, were often cited as invaluable AI tools in science. However, the same models also topped the list of AI tools causing concern. 

The leading worries included:

  • Proliferation of misinformation (68%)
  • Facilitated plagiarism (68%)
  • Introduction of errors into research documents (66%)

Isabella Degen, an AI in medicine researcher at the University of Bristol, commented, “There is clearly misuse of large language models. We don’t understand well where the border between good use and misuse is.”

Furthermore, ownership of AI tools and access to computing resources were highlighted as barriers to modern research. GPUs are exceptionally costly, making it difficult for research institutions to train powerful models in-house. 

As Garrett Morris, a University of Oxford chemist, described, “Only a very small number of entities on the planet have the capabilities to train the very large models. That constraint is limiting science’s ability to make discoveries.”

Overall, most researchers believe AI is an irreversible force in science. 

As Yury Popov, a liver disease specialist at the Beth Israel Deaconess Medical Center, concluded, “AI is transformative. We have to focus now on how to make sure it brings more benefit than issues.”

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
