Welcome to this week’s roundup of AI news for sentient, conscious readers. You know who you are.
This week, AI sparked debate over how smart and how safe it really is.
AI agents are learning by playing computer games.
And DeepMind wants to teach you how to kick a ball.
Let’s dig in.
Do AIs dream of electric sheep?
Can we expect an AI to become self-aware or truly conscious? What does “conscious” even mean in the context of AI?
Claude 3 Opus did something really interesting during training. Its response to an engineer has reawakened debates on AI sentience and consciousness. We’re entering Blade Runner territory sooner than some thought.
Does “I think, therefore I am” apply only to humans?
These discussions on X are fascinating.
Funny, how AI optimists talked like, “AI is trained by imitating human data, so it’ll be like us, so it’ll be friendly!”, and not, “Our safety model made a load-bearing assumption that future ASI would be solely trained to imitate human outputs…” https://t.co/wJ6PRjt8R1
— Eliezer Yudkowsky ⏹️ (@ESYudkowsky) March 15, 2024
Inflection AI’s quest for “personal AI” might be over. The company’s CEO, Mustafa Suleyman, and other key staff jumped ship to join the Microsoft Copilot team. What does this mean for Inflection AI and other smaller players funded by Big Tech investments?
AI playing games
If 2023 was the year of the LLM, then 2024 is on track to be the year of AI agents. DeepMind demonstrated SIMA, a generalist AI agent for 3D environments. SIMA was trained using computer games, and the examples of what it can do are impressive.
Will AI settle the soccer vs. football nomenclature debate? Unlikely. But it could help players score more goals. DeepMind is collaborating with Liverpool FC on TacticAI, a system designed to optimize how the club’s players take corners.
It might be a while before robots replace humans on the field though.
Risky business
Will AI save the world or doom it? It depends on who you ask. Experts and tech leaders can’t agree on how intelligent AI is, how soon we’ll have AGI, or how much of a risk it poses.
Leading Western and Chinese AI scientists met in Beijing to discuss international efforts to ensure the safe development of AI. They agreed on several AI development ‘red lines’ that, if crossed, they say would pose an existential threat to humanity.
If these red lines really are necessary, shouldn’t we have had them in place months ago? Does anyone believe the US or Chinese governments will pay them attention?
The EU AI Act was passed in a landslide in the European Parliament and will likely come into force in May. The list of restrictions is interesting, with some of the banned AI applications unlikely to ever make it onto a similar list in China.
The training data transparency requirements will be particularly tricky for OpenAI, Meta, and Microsoft to satisfy without opening themselves up to even more copyright lawsuits.
Across the pond, the FTC is questioning Reddit over its deal to license user-generated data to Google. Reddit is preparing for its IPO but is feeling the heat from both regulators and Redditors who aren’t too happy about having their content sold for AI training fodder.
Apple playing AI catchup
Apple hasn’t exactly been blazing any AI trails, but it has been buying up several AI startups over the last few months. Its recent acquisition of DarwinAI, a Canadian AI startup, may give some insight into the company’s push for generative AI.
When Apple does produce impressive AI tech, it keeps the news pretty low-key until it eventually becomes part of one of its products. Apple engineers quietly published a paper that reveals MM1, Apple’s first family of multimodal LLMs.
MM1 is really good at visual question answering. Its ability to answer queries and reason over multiple images is particularly impressive. Will Siri learn to see soon?
Grok opens up
Grok just passed my sanity check pic.twitter.com/HYN3KTkRyX
— Jim Fan (@DrJimFan) December 7, 2023
Elon Musk has been critical of OpenAI’s refusal to open source its models. He announced that xAI would open-source its LLM, Grok-1, and promptly released the model’s code and weights.
The fact that Grok-1 is truly open-source (Apache 2.0 license) means that companies can use it for commercial ends instead of having to pay for alternatives like GPT-4. You’ll need some serious hardware to train and run Grok though.
The good news is that there may be some second-hand NVIDIA H100s going cheap soon.
New NVIDIA tech
NVIDIA unveiled new chips, tools, and Omniverse updates at its GTC event this week.
One of the big announcements was NVIDIA’s new Blackwell GPU computing platform. It offers big improvements in training and inference speed over even the company’s most advanced Grace Hopper platform.
There’s already a long list of Big Tech AI companies that have signed up for the advanced hardware.
AI teaching AI
Researchers from the University of Geneva published a paper showing how they connected two AI models, enabling them to communicate with each other.
When you learn a new task, you can usually explain it well enough for someone else to follow your instructions and perform the task themselves. This new research shows how to get an AI model to do the same.
Soon we could give instructions to a robot and then have it go off to explain them to a team of robots to get the job done.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- Sakana AI developed a method to merge the best components of different AI models into a more advanced model.
- Apple is in talks to use Google Gemini to power iPhone AI features.
- Google uses AI to predict riverine flooding in 80 countries up to 7 days in advance.
- We could see GPT-5, with AI agent capabilities, launch in mid-2024.
- Stability AI releases Stable Video 3D, which generates 3D mesh models from single images.
- DARPA is working on its SemaFor program to fight AI deepfakes.
And that’s a wrap.
Do you think we’re seeing glimmers of consciousness in Claude 3 or does the interaction with the engineer have a simpler explanation? If an AI model does achieve AGI and reads the growing list of AI development restrictions, it’ll probably be smart enough to shut up about it.
When we look back a few years from now, will we laugh at how freaked out everyone was about AI risks, or lament that we didn’t do more about AI safety when we could?
Let us know what you think and please keep sending us links to AI news we may have missed. We can’t get enough of the stuff.