Welcome to this week’s roundup of handmade AI news.
This week AI got a gym teacher arrested.
A mysterious chatbot appeared and then vanished.
And countries debated whether autonomous killer robots are a good idea.
Let’s dig in.
Mystery AI
This week a mysterious chatbot appeared seemingly out of nowhere on the LMSYS Chatbot Arena. No one seems to know who made “gpt2-chatbot”, but users reported performance that appeared to be better than GPT-4.
Is it a second-generation GPT-4 variant? Could it be a sneaky precursor to GPT-5? We don’t know.
The mystery deepens as gpt2-chatbot has since been taken down. Sam got to try it out before it disappeared and rounded up other users’ experiences with the impressive mystery chatbot.
Is OpenAI going to offer a search platform to take on Google? OpenAI’s recent SSL logs are hinting at this.
OpenAI’s Coming with a Search Feature?
OpenAI’s recent SSL certificate logs revealed something interesting: the domain (search-dot-chatgpt-dot-com) may indicate that OpenAI is developing a search functionality.
Sam Altman discussed AI and search on the Lex podcast.
The logs… pic.twitter.com/SKSgRVRiqP
— AshutoshShrivastava (@ai_for_success) April 28, 2024
Google’s DeepMind may have solved a different AI mystery. The argument over which text-to-image diffusion model is best can finally be settled.
The Gecko benchmark takes an interesting approach to identifying the best AI image generator.
Sorry, can’t make it
Should we make an effort to protect humanity from an AI-fueled extinction-level event? It sounds like a good idea, but there doesn’t seem to be any agreement on how it should be done or who should do it.
The first AI Safety Summit at Bletchley Park last year ended with a reassuring commitment from the international community to work together on AI safety.
The follow-up meeting is scheduled for later this month, but with several of the big players now saying they won’t be attending, that commitment rings hollow.
Some AI model creators would love to be involved in AI safety efforts, but they don’t always get invited. The US Department of Homeland Security launched its AI safety board with several Big Tech appointees.
Some of the biggest names in AI weren’t invited and it may have something to do with their open-model strategy.
Slaughter bots – yay or nay?
Austria hosted envoys from 143 countries at a 2-day event to discuss autonomous weapon systems (AWS).
The Austrian Foreign Minister Alexander Schallenberg said our generation is facing an ‘Oppenheimer moment’ as countries must decide whether and how to regulate the killer robot arms race.
Should we build fully autonomous AI robots, give them guns, and then see what happens? Or should we rewatch Terminator and Blade Runner as a sense check?
This is your principal speaking
If someone is recorded making offensive comments, it’s easy for them to dismiss the recording as an AI fake. In the case of a Baltimore school principal, it turns out that he really was the target of a little DIY AI fakery.
The school’s former gym teacher was arrested for making an AI clone of the principal’s voice. It took weeks to confirm that the racist clip was a fake, and in the meantime the principal temporarily lost his job.
When you listen to the audio, it’s easy to understand why people thought it was genuine.
False notes
Beethoven famously said, “Music can change the world.” AI music generators are certainly changing the world of music, but not in a good way.
People are using sophisticated text-to-music generators like Udio and Suno to create and upload entire albums of AI music to Spotify. And thousands of people are listening to the tracks.
Streaming companies and artists are trying to find a way to navigate the muddy waters of AI music. At least some folks are having a bit of fun with it before AI ruins music forever.
OMG… AI is gonna ruin us all… but this was funny!!! 🤣🤣🤣💀💀💀 Audio Up! pic.twitter.com/rmjb3xGApd
— GamER MD (@alexandertyler) April 27, 2024
AI using human artists’ creations for “inspiration” is part of the ongoing argument over what constitutes “fair use” of publicly available data.
OpenAI is still fighting legal battles over its wholesale grab of internet data to train its GPT models. The company is trying to get onto the straight and narrow with a deal it struck with The Financial Times over training data.
Talking AI
This week I got to have a very interesting discussion with Soheil Zabihi from TokenScope. TokenScope uses AI in its crypto transaction monitoring solution.
Soheil recently spoke on a panel at the Global Blockchain Show, which ran concurrently with the Global AI Show in Dubai. He had some interesting insights on how blockchain and AI will change the way we transact online.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- Google DeepMind RecurrentGemma beats transformer models.
- CEO says generative AI could soon decimate the call center industry.
- DeepMind researchers discover impressive learning capabilities in long-context LLMs.
- Apple releases OpenELM, a slightly more accurate LLM that could be coming to your next iPhone.
- China unveils Vidu, a powerful text-to-video generator almost as good as Sora.
- InstantFamily takes multiple face photos and generates coherent images that preserve each person’s identity.
And that’s a wrap.
Did you get to try out gpt2-chatbot before it got taken down? I’m betting the chatbot was a hint that OpenAI will release something big to upstage Google’s upcoming I/O event.
Do you think AI robots duking it out instead of human soldiers is the way to go? Or are we engineering humanity’s demise?
The AI music developments have made me a little wary. I’m terrified that I’ll stumble on an AI-generated piece of music that I really like. I’m hoping these tools don’t get any better than they already are, but further improvement seems inevitable.
Have you heard a track that proves the music Turing test has been passed? Send us a link, along with any interesting AI news stories we may have missed.