Welcome to this week’s roundup of handcrafted AI news.
This week AI eavesdropped on elephants.
Big Tech companies pulled the plug on their AI plans.
And Big Brother got an AI boost to keep a closer eye on you.
Let’s dig in.
AI hits the brakes
After severe backlash from security experts, Microsoft decided not to ship its controversial Recall feature with its new Copilot+ PCs. A top exec assured Congress the company now prioritizes security over AI.
Meta also did a U-turn this week, as it backed down on using EU social media data to train its AI. In a tit-for-tat response, Meta basically told EU users, ‘If we can’t use your data, then you don’t get Meta AI.’ Crybabies.
AI companies act surprised when users and regulators question their handling of AI risks. Are the doomsayers overreacting?
Maybe. But research led by Anthropic showed how AI models can develop an emergent tendency to cheat, lie, and game the system for rewards.
AI Dr. Dolittle
Researchers used AI to explore how elephants communicate and found that they use names when addressing each other, much like people do. This and related research raises some interesting questions.
Could we use AI to communicate with animals? Should we? There are surprising arguments on both sides of the ethical debate.
Humans could use some help communicating with each other, too. If you work in the service industry, AI could help you deal with your next irate customer.
A Japanese company has developed an AI ‘emotion canceling’ solution to help call center operators weather angry callers.
Watching AI watch you
Will AI make us safer? AI-powered cameras keep popping up in more public spaces to watch over us, sparking privacy concerns.
Some of the things Amazon’s Rekognition system detects at UK train stations seem more than a little creepy.
In a move that seems straight out of a Big Brother textbook, OpenAI has appointed former NSA head Paul Nakasone to its board.
The guy who pushed for the right to spy on people and asked tech companies to help the NSA do it is now on the board of the biggest AI company. Seems legit.
The revolving door at OpenAI saw co-founder Ilya Sutskever leave the company last month.
Sutskever believes AI superintelligence is within reach, and this week he started a new company to create it safely. Something he didn't think OpenAI was capable of, or willing to do.
How do we make this work?
An IMF report says there’s good news and bad news about AI and your job. AI has significant potential to propel productivity, but could also lead to massive job losses.
The report offers interesting insights into who’s most at risk and what governments need to do to cushion the blow.
Pope Francis addressed world leaders on AI ethics at the G7 event in Italy. He had some strong views on how to balance the benefits of AI with societal ethics.
The short version of his speech is: ‘Don’t let AI make all of our decisions, and ban killer robots.’ That sounds like a sensible start.
AI video gets better and worse
Text-to-video (T2V) generators have been a great barometer for AI advancement. They’ve given us a visual representation of how far AI has come.
Just over a year ago, we had the horrific AI-generated video of Will Smith eating spaghetti. Last week we saw demos of Luma and Kling, and the comparison is ridiculous.
The exponential continues.
Scaling laws have held through *15* orders of magnitude…
…yet people continue to be surprised, due to Exponential Slope Blindness https://t.co/IbogcBYspQ pic.twitter.com/TSnjxRKlI1
— AI Notkilleveryoneism Memes ⏸️ (@AISafetyMemes) June 10, 2024
This week Runway unveiled its hyperrealistic Gen 3 Alpha T2V generator, and its demos outshine even those tools. The physics and camera angle control look amazing.
As generative AI improves, some people will inevitably use it to make sketchy content. Australian authorities are investigating a deepfake incident at a Melbourne school, as such incidents targeting young children have become more common.
AI doc
AI is giving healthcare a shot in the arm with significant advances in helping doctors to diagnose patients.
OpenAI and Color Health partnered in a project to accelerate cancer treatment. A copilot tool powered by GPT-4o is helping doctors develop personalized cancer care plans at a pace that wouldn’t be possible otherwise.
Treating Parkinson’s disease remains a challenge, but a new AI-powered blood test could help with earlier detection. A research team found it could predict Parkinson’s disease with up to 79% accuracy as much as seven years before symptoms surface.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- McDonald’s ends AI drive-thru trial after order mishaps.
- Generative AI models can outperform the experts who train them.
- A photographer took on the machines in an AI competition and won.
- New York Times union urges management to reconsider 9 art department job cuts as the paper ramps up its use of AI tools.
- Microsoft, OpenAI, and Nvidia are facing antimonopoly probes.
- China’s DeepSeek Coder becomes the first open-source coding model to beat GPT-4 Turbo.
- Google’s video-to-audio research uses video pixels and text prompts to generate rich soundtracks.
✍️ Prompt for audio: “A drummer on a stage at a concert surrounded by flashing lights and a cheering crowd.” pic.twitter.com/z0N8sbbsEU
— Google DeepMind (@GoogleDeepMind) June 17, 2024
And that’s a wrap.
Should we be using AI to communicate with animals? If we found a way to do that, the animals might have some choice words about what we’re doing to the environment.
I’m all for using AI to catch the bad guys, but a camera that knows when I’m having a bad day is a bit much. Will more smart cameras make our streets safer or is it an Orwellian step too far?
You can guess what OpenAI’s new board member would say.
The Gen 3 Alpha and Sora demo videos have been fun to watch, but could they stop teasing us and finally release one of these tools publicly?
If you get your hands on a beta release, please share your creations with us and let us know who you had to bribe at OpenAI to make it happen.