Welcome to our roundup of this week’s freshest AI news.
This week, AI revealed it thinks firing nukes is a good strategy.
The fightback against AI fakes continues.
And AI might be causing or curing carbon emissions. Or both.
Let’s dig in.
AI wants to press the button
AI is making its way into all kinds of defense applications, from weapons to battlefield management. Is that a good idea? A new study shows that AI models are unpredictable in wargame scenarios and surprisingly quick to press the nuke button.
If the AI regulators want to keep us safe, they should probably move a little quicker. The UK published its initial white paper on AI regulation almost a year ago.
The next iteration, which includes the results of consultations, has finally been published. It’s interesting to see how much the UK’s approach differs from the EU’s AI Act.
If an AI model does eventually pull the trigger, or enforceable AI legislation ever gets passed, it’ll definitely make the news. Microsoft is teaming up with Semafor to use AI to find newsworthy stories for its reporters to write about.
Soon AI may be making the news and writing it.
Last week AI fakes dominated the headlines and this week saw companies and authorities scrambling to fight back.
US lawmakers never miss an opportunity for a good acronym. They proposed the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) bill in response to the AI nudes of Taylor Swift.
You’d think non-consensual porn was a clear-cut issue, but questions over free speech, artistic expression, and what constitutes “realistic” muddy the legal waters a little.
Meta upped the ante in tackling AI deep fakes with a commitment to labeling AI-generated content on its platforms.
The lag between the posting of a fake image and its detection and removal is still an issue. Meta also admits it doesn’t have the means to detect fake audio and video reliably yet.
OpenAI says it’s trying to do its bit to battle fake images. The company is beginning to add C2PA metadata to images created by DALL-E 3 so you can tell if they’re AI-generated. It’s not likely to be very effective, considering how easy it is to get rid of that digital “watermark”.
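To see why that watermark is so fragile: C2PA provenance travels in the image file’s metadata, not in the pixels, so anything that rewrites the file without copying the metadata across (a screenshot, a re-encode, many platforms’ upload pipelines) silently discards it. As an illustrative sketch (this is not OpenAI’s or C2PA’s tooling, just standard PNG chunk handling), here’s a pure-Python filter that drops every ancillary chunk from a PNG, metadata included:

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"
# Chunks a PNG decoder strictly needs. Everything else (tEXt, iTXt,
# eXIf, and similar chunks where provenance metadata lives) is ancillary.
CRITICAL = {b"IHDR", b"PLTE", b"IDAT", b"IEND"}

def strip_metadata(png_bytes: bytes) -> bytes:
    """Return a copy of the PNG keeping only critical chunks."""
    assert png_bytes.startswith(PNG_SIG), "not a PNG"
    out = bytearray(PNG_SIG)
    pos = len(PNG_SIG)
    while pos < len(png_bytes):
        # Each chunk: 4-byte length, 4-byte type, data, 4-byte CRC.
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype in CRITICAL:
            out += png_bytes[pos:pos + 12 + length]
        pos += 12 + length
    return bytes(out)
```

Real C2PA manifests can also ride in JPEG APP11 segments or sidecar files, but the principle is the same: the provenance data lives in metadata the image itself doesn’t depend on, so a lossless rewrite removes it without visibly changing a single pixel.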
Meanwhile, at Google…
Finally, all images generated within Bard, ImageFX and SGE will use SynthID: our cutting-edge toolkit for watermarking AI-created content.
— Google DeepMind (@GoogleDeepMind) February 1, 2024
If you were in a Zoom meeting with your boss, could you tell if it was really him or an AI fake? Don’t be so sure. This elaborate scam saw an employee lose $25.6m of the company’s cash because he thought he was talking to his boss and colleagues in a video call.
Data centers use a huge amount of energy. As more people use AI tools, power demands are soaring. Sam helped us make sense of the numbers to see if this is sustainable.
And then, just as we were feeling guilty about the carbon footprint of our AI-generated cat pics, a new report claims that AI energy usage and carbon emission stats may be overblown.
I’m guessing the truth lies somewhere in between the climate disaster and the more laissez-faire opinions.
A team of researchers from New York University may have found a way to train AI models more efficiently. They developed an AI system that learns from footage captured from a child’s perspective.
Babies wearing headcams to train AI models. What a time to be alive.
Making and breaking pictures
It looks like Bard is about to be rebranded as Gemini, with Google now powering the chatbot with its Gemini Pro model. The upgrade also sees the free tool get image generation, which is a luxury ChatGPT users have to pay for.
Those AI image generators are going to be harder to train though. Nightshade, the image data poisoning tool, registered more than 250,000 downloads within days of its release.
Students participating in the Vesuvius challenge put AI’s image processing to good use. They revealed passages of a charred ancient Greek scroll using machine learning. Now we know more about the Epicureans’ favorite color and what they thought about capers.
AI is going to supercharge the way we shop online. Amazon is rolling out Rufus, a generative AI shopping assistant that will help you shift more money from your account to the one Jeff Bezos has.
If you haven’t maxed out your card limit yet, Mastercard will make it a little harder for fraudsters to transact on your behalf. The credit card company created its own generative AI model to fight credit card fraud.
If you want to keep up with the latest in real-world applications of AI in industry then there are some great conferences coming up in the next few weeks.
- CDAO Financial Services 2024 will explore how cutting-edge technology like AI is impacting data and analytics within the financial services industry.
- The Travel Trends AI Summit 2024 explores the rising influence of AI on the travel and tourism industries.
- The upcoming Generative AI for Automotive Summit 2024 will cover how companies like Toyota, BMW, and Bugatti are using generative AI in automotive design and process optimization.
- CFO StraTech 2024 takes a look at financial strategy, resource allocation, and how CFOs are leveraging cutting-edge technology like AI to guide corporate strategy.
- The Post-Industrial Summit 2024 brings together a wealth of businesses and speakers from the likes of AWS, Deloitte, and Oracle to address the evolving role of AI in business transformation.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- TikTok is experimenting with using AI to allow users to replicate a person’s voice in real time.
- Take a look inside the underground site where neural networks churn out fake IDs.
- New Senate bill aims to kill the proposed SEC rule on AI conflicts of interest.
- Amazon’s AWS cloud computing boss likens generative AI hype to the Dotcom bubble.
- Amazon uses a diffusion model to allow you to try on clothes or place an e-commerce item in any setting virtually.
Amazon presents Diffuse to Choose
Enriching Image Conditioned Inpainting in Latent Diffusion Models for Virtual Try-All
paper page: https://t.co/tCqziYU45B
Diffuse to Choose (DTC) allows users to virtually place any e-commerce item in any setting, ensuring detailed,… pic.twitter.com/s6lb2uPzy6
— AK (@_akhaliq) January 26, 2024
And that’s a wrap.
Are you getting any better at spotting AI fakes with all the practice we’ve been getting lately? It’s unlikely legislation or watermarks are going to be much help any time soon.
AI and nukes, together at last. Do you think we’ll be safer with a robot in charge of the button instead of whichever old guy is president of a nuclear power at the time?
Have you tried out the upgraded Bard / Gemini yet? Is it good enough to have you cancel your ChatGPT Plus subscription? It’s pretty good, but I’m holding out for GPT-5. Come on, Sam. Press the button already!
Let us know which of our news stories you enjoyed and send us links to ones we may have missed.