Oxford University study demonstrates how biological learning trumps AI

January 3, 2024


Researchers from the MRC Brain Network Dynamics Unit and Oxford University’s Department of Computer Science have identified a novel means of comparing learning in AI systems and the human brain.

The study begins by addressing a fundamental issue in both human and machine learning: credit assignment. Credit assignment is the problem of working out which parts of a system are responsible for its mistakes – a question that is intrinsic to learning itself.

AI systems approach this through backpropagation, adjusting parameters to correct errors in output. 

Backpropagation works like a feedback loop. When an AI makes a prediction or decision that turns out to be incorrect, the method traces the error back through the network’s layers.

The process identifies which parts of the computation contributed to the error and then adjusts those specific parts, effectively refining the AI’s decision-making process for future predictions.
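
To make this concrete, here is a minimal sketch of backpropagation on a tiny two-layer network – a generic NumPy illustration, not code from the study; the network shape, weights W1 and W2, and learning rate are invented for the example:

```python
import numpy as np

# A minimal, generic illustration of backpropagation (not from the study).
rng = np.random.default_rng(0)

W1 = rng.normal(size=(2, 2)) * 0.5   # input -> hidden weights
W2 = rng.normal(size=(2, 1)) * 0.5   # hidden -> output weights

x = np.array([[0.6, 0.9]])           # a training input
y = np.array([[1.0]])                # the output it should produce
lr = 0.1                             # learning rate

for step in range(100):
    # Forward pass: make a prediction.
    h = np.tanh(x @ W1)              # hidden-layer activity
    y_hat = h @ W2                   # predicted output

    # The error in the output.
    err = y_hat - y

    # Backward pass: trace the error back through the layers to find
    # how much each weight contributed to it (the chain rule).
    grad_W2 = h.T @ err
    grad_h = err @ W2.T * (1 - h ** 2)   # through the tanh nonlinearity
    grad_W1 = x.T @ grad_h

    # Adjust the weights that caused the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

Note that the weights change first, and the network’s activity only changes as a consequence on the next forward pass – the ordering the study contrasts with the brain.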

The study, published in Nature Neuroscience, explains how backpropagation differs significantly from the human brain’s learning method. 

While AI traditionally relies on backpropagation to tackle errors, the researchers propose that the brain performs the same tasks through a process called ‘prospective configuration.’

In prospective configuration, instead of directly adjusting connections based on errors, the brain first predicts the ideal pattern of neural activity that learning should produce. Only after this prediction do changes to the neural connections occur.

This method contrasts with backpropagation used in AI, where the process is reversed – connection adjustments lead, and changes in neural activity follow.
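
As a rough illustration of that reversed order – a loose sketch in the spirit of the energy-based models this line of research builds on, not the authors’ actual algorithm – the snippet below first lets the hidden activity settle toward a pattern consistent with the desired output, and only then updates the connections. All names, rates, and dynamics here are assumptions made for the example:

```python
import numpy as np

# A simplified "activity first, weights second" sketch (assumed toy
# dynamics for illustration only, not the algorithm from the paper).
rng = np.random.default_rng(0)

W1 = rng.normal(size=(2, 2)) * 0.5   # input -> hidden connections
W2 = rng.normal(size=(2, 1)) * 0.5   # hidden -> output connections

x = np.array([[0.6, 0.9]])           # input
y = np.array([[1.0]])                # desired output
lr_act, lr_w = 0.2, 0.1              # activity and weight step sizes

# Step 1: predict the ideal pattern of neural activity.
# With the desired output held fixed, let the hidden activity settle
# toward a pattern consistent with both the input and that output.
h = x @ W1                            # start from the current activity
for _ in range(50):
    err_h = h - x @ W1                # mismatch with what the input drives
    err_y = y - h @ W2                # mismatch with the desired output
    h = h + lr_act * (-err_h + err_y @ W2.T)

# Step 2: only now adjust the connections, so that the settled activity
# becomes what the network produces on its own next time.
W1 = W1 + lr_w * x.T @ (h - x @ W1)
W2 = W2 + lr_w * h.T @ (y - h @ W2)
```

The contrast with the backpropagation snippet above is purely one of ordering: activity settles first and the weights then follow, rather than the other way round.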

Crucially, prospective configuration, an approach likely shared by virtually all biological brains, offers a more efficient learning mechanism than backpropagation. 

Unlike AI, humans can rapidly ingest new information with minimal exposure and without eroding existing knowledge, a skill AI struggles to match.

This strategy not only preserves existing knowledge but also accelerates the learning process.

There’s still life in the old human brain yet

The team illustrates this concept with an analogy. Imagine a bear fishing for salmon: it uses the sight and sound of the river and the smell of salmon to predict success.

If the bear suddenly can’t hear the river due to a damaged ear, an AI model trained with backpropagation would spread the resulting error across its connections – including those linking smell to salmon – and incorrectly assume the absence of salmon.

In contrast, the animal’s brain, operating on prospective configuration, would still rely on the smell to deduce the salmon’s presence.

This theory, backed by computer simulations, demonstrates that models using prospective configuration outperform traditional AI neural networks in learning efficiency.

Professor Rafal Bogacz, the lead researcher from the MRC Brain Network Dynamics Unit and Oxford’s Nuffield Department of Clinical Neurosciences, said of the study: “There is currently a big gap between abstract models performing prospective configuration, and our detailed knowledge of anatomy of brain networks.”

“Future research by our group aims to bridge the gap between abstract models and real brains, and understand how the algorithm of prospective configuration is implemented in anatomically identified cortical networks.”

Co-author Dr. Yuhang Song added: “In the case of machine learning, the simulation of prospective configuration on existing computers is slow, because they operate in fundamentally different ways from the biological brain. A new type of computer or dedicated brain-inspired hardware needs to be developed, that will be able to implement prospective configuration rapidly and with little energy use.”

Bio-inspired AI is in the pipeline

Bio-inspired AI, also called neuromorphic AI, aims to create systems that can sense, think, and behave akin to natural organisms.

It focuses on elegance, adaptability, and energy efficiency – attributes inherent in biological systems.

The human brain, with its efficient use of energy and ability to thrive in varied environments, still trumps AI across numerous disciplines and applications.

Indeed, our brain, running on minimal power, is conscious – a milestone that, by most estimates, AI has yet to reach.

In contrast to current AI models like ChatGPT, which run on thousands of power-hungry GPUs, bio-inspired AI aims to develop more sustainable and adaptable systems.

There has been progress in this field of late, with IBM and Rain AI developing low-powered chips modeled on synaptic functions. 

OpenAI CEO Sam Altman backed Rain AI last year, and OpenAI aimed to secure millions of dollars’ worth of chips from the company.

Other novel approaches to bio-inspired AI include swarm intelligence, which seeks to mimic the collective decision-making of groups of insects, birds, and fish.  

As this field progresses, it promises to bridge the gaps identified in traditional AI models, leading us toward a future where machines are not just tools but entities with a degree of autonomy and environmental interaction. 

As the Oxford study demonstrates, though, there are fundamental questions for AI to answer before it can match biological brains. 

Sam Jeans

Sam is a science and technology writer who has worked in various AI startups. When he’s not writing, he can be found reading medical journals or digging through boxes of vinyl records.
