Researchers connect two AI models, enabling them to communicate

March 19, 2024


Scientists from the University of Geneva have bridged a gap in AI by creating an artificial neural network that learns new tasks from written instructions and then describes them to a second AI, which can replicate them.

Humans can grasp new tasks from short instructions and articulate the learned task well enough for another person to replicate it. This is integral to human communication and is a key feature of our conscious world. 

This fascinating study, detailed in Nature Neuroscience, grants AI a form of human communication and learning that has long evaded the technology. 

The project, led by Alexandre Pouget, a professor at the UNIGE Faculty of Medicine, alongside his team, delves into advanced techniques within natural language processing – a subfield of AI focused on machines understanding and responding to human language.

Pouget explains the current limitations of AI in this context, noting in an article published on the University of Geneva’s website: “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it.”

The Geneva team enhanced an existing language-understanding artificial neural network (ANN), S-Bert (Sentence-BERT).

They connected S-Bert to a smaller, simpler network simulating the human brain's language perception and production regions: Wernicke's area and Broca's area.

Through training, this network could execute tasks based on written English instructions and then convey these tasks linguistically to a “sister” network, allowing the two AIs to communicate task instructions purely through language.

Reidar Riveland, a Ph.D. student involved in the study, explained, “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons.”
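The study's own code isn't reproduced here, but the general wiring Riveland describes is easy to sketch. Below is a minimal, hypothetical Python illustration of the idea: a pretrained SBERT encoder (loaded via the sentence-transformers library; the checkpoint name, network sizes, and dimensions are assumptions for illustration, not the study's choices) turns a written instruction into an embedding, which then conditions a small network that maps stimuli to actions.

```python
# A minimal sketch (not the authors' code) of the architecture described above:
# a pretrained SBERT sentence encoder produces an instruction embedding, which
# conditions a small "sensorimotor" network mapping stimuli to actions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

# Hypothetical checkpoint choice; the study used a ~300-million-parameter S-Bert.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

class TaskNetwork(nn.Module):
    """A small network (a few thousand units) that turns an
    (instruction embedding, stimulus) pair into an action."""
    def __init__(self, embed_dim: int, stim_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + stim_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, instruction_emb: torch.Tensor, stimulus: torch.Tensor) -> torch.Tensor:
        # Condition the action on the language embedding by simple concatenation.
        return self.net(torch.cat([instruction_emb, stimulus], dim=-1))

instruction = "Point to the brighter of the two stimuli."
emb = torch.tensor(encoder.encode([instruction]))  # shape: (1, embed_dim)
stimulus = torch.randn(1, 32)                      # toy stand-in for visual input
policy = TaskNetwork(embed_dim=emb.shape[-1], stim_dim=32, action_dim=8)
action_logits = policy(emb, stimulus)              # untrained; illustrative only
```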

The tasks ranged from simple instructions like pointing to a location to more complex commands requiring the identification of subtle contrasts between visual stimuli. 

Here are the study’s key achievements:

  • The AI system could both comprehend and execute instructions, correctly performing new, unseen tasks 83% of the time based on linguistic instructions alone.
  • The system could generate descriptions of learned tasks in a way that allowed a second AI to understand and replicate these tasks with a similar success rate (a toy sketch of this exchange follows below).
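To make that second point concrete, here is a fully hypothetical toy sketch of the exchange: one agent emits a description of a task it has learned, and a second agent acts on that description. The ToyTask and ToyAgent names and the hard-coded behavior are stand-ins for the trained production and comprehension pathways, not the study's implementation; in the actual study both pathways are learned, with roughly 83% success on unseen tasks.

```python
# Toy sketch of the task-communication loop between two agents.
import random

class ToyTask:
    """Toy 'respond toward the brighter stimulus' task."""
    def sample_trial(self):
        left, right = random.random(), random.random()
        return (left, right), ("left" if left > right else "right")

class ToyAgent:
    def describe_task(self, task) -> str:
        # Stand-in for the trained linguistic production pathway ("Broca").
        return "Respond toward the brighter of the two stimuli."

    def execute(self, description: str, stimulus) -> str:
        # Stand-in for the trained comprehension pathway ("Wernicke"):
        # here we simply hard-code the named behavior for illustration.
        left, right = stimulus
        return "left" if left > right else "right"

def communicate(sender: ToyAgent, receiver: ToyAgent, task: ToyTask, trials: int = 100) -> float:
    """Sender describes a learned task; receiver tries to perform it from language alone."""
    description = sender.describe_task(task)
    hits = sum(
        receiver.execute(description, stim) == answer
        for stim, answer in (task.sample_trial() for _ in range(trials))
    )
    return hits / trials

print(communicate(ToyAgent(), ToyAgent(), ToyTask()))  # 1.0 for this toy; the study reports ~83%
```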

This furthers the potential for AI models to learn and communicate tasks linguistically, opening new opportunities in robotics. 

It integrates linguistic understanding with sensorimotor functions, meaning an AI could hold a conversation and recognize when an instruction asks it to perform a task like grabbing something off a shelf or moving in a certain direction.

“The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other,” the researchers said of the study.

Combined with recent massive investments in AI robotics companies like Figure AI, this suggests intelligent androids might be closer to reality than we think.


