Despite mounting concerns surrounding AI, Meta’s head of global affairs and communications, Sir Nick Clegg, argues that the technology is currently too “stupid” to pose an existential threat.
This coincides with the unveiling of Meta’s new large language model (LLM), Llama 2.
Sir Nick – a former British politician – said on BBC Radio 4’s Today program, “I think a lot of the existential warnings relate to models that don’t currently exist, so-called super-intelligent, super-powerful AI models – the vision where AI develops an autonomy and agency on its own, where it can think for itself and reproduce itself. The models that we’re open-sourcing are far, far, far short of that. In fact, in many ways they’re quite stupid.”
Llama, an open-source LLM project by Meta, enables individuals and organizations to fine-tune and optimize the model at will, including training it on their own internal data.
This could undermine rival companies, such as OpenAI, that charge businesses to use their AI models.
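For readers curious what that looks like in practice, the sketch below shows one common route: loading the model through the Hugging Face transformers library. The model identifier meta-llama/Llama-2-7b-hf and the prompt are illustrative assumptions rather than details from Meta’s announcement, and access to the weights requires accepting Meta’s license on the Hugging Face Hub.

```python
# Minimal sketch: loading a Llama 2 base model locally with Hugging Face
# transformers. Assumes access to the gated "meta-llama/Llama-2-7b-hf"
# weights has been granted; the model id and prompt are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-hf"  # 7B base model; chat-tuned variants also exist
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Run a short completion to confirm the model loads and generates text.
inputs = tokenizer("Open-source language models are", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

From here, an organization could go on to fine-tune the same weights on its own internal data, which is exactly the flexibility that closed, pay-per-use APIs don’t offer.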
Llama 2 is a serious competitor to proprietary AI models
Meta’s Llama 2 will be made available as part of a collaboration with Microsoft.
This is interesting considering Microsoft’s extensive investment in OpenAI, whose ChatGPT is one of Llama’s chief rivals. Big tech investment and research have become completely entangled, with competitors sharing expertise and resources.
It’s a bizarre situation – Microsoft, OpenAI’s biggest backer, claims not to be in the loop on OpenAI’s strategies, while OpenAI openly collaborates with some of Microsoft’s rivals.
Meta’s first LLM, LLaMA, was leaked to the public via 4chan. Once the public got their hands on the model, communities began tuning and optimizing it, including slimming it down to run on a MacBook, a smartphone, and even a Raspberry Pi.
Some argue that companies like OpenAI are gunning for regulation as they know this will stunt the open-source community’s growth, which has emerged as a major disruptive force in the AI industry.
Despite fears that open-sourcing the model exposes it to misuse by nefarious actors, Meta insists that its AI won’t be used to promote harmful activities.
When asked about these models’ possible dangers, Sir Nick stated that they “cannot build a nuclear bomb.”
Mark Zuckerberg further argued that open-source models can bring “improved safety and security because when software is open, more people can scrutinize it.”
Meta’s terms of use specifically prohibit using Llama to promote violence, create computer viruses, build weapons, develop nuclear technology, spread spam, promote hate speech, or share child abuse content.
However, these are merely the terms users agree to when downloading the model. Open-source is open-source, and the public will ultimately do what they want with Llama 2.
According to Sir Nick, Meta’s decision to open-source its technology is not unusual. He said, “It’s not as if we’re at a T-junction where firms can choose to open source or not. Models are being open-sourced all the time already.”
He also stated that current AI tools aren’t as advanced as they are often portrayed, describing generative AIs like ChatGPT as able to “guess at great speed” while having “no innate autonomous intelligence at all.”
What the AI community does with Llama 2 will be fascinating to watch.