DAI#11 – Safety summits and mysterious deep sea AI platforms

November 3, 2023

Welcome to this week’s roundup of the finest collection of hand-crafted AI news.

This week world leaders drank a lot of tea and chatted about AI safety.

The President of the USA released an executive order on AI that may never go into effect.

And the future of unregulated AI development may be on the high seas.

Let’s dig in.

A little less conversation, a little more action, please

The UK AI Safety Summit kicked off this week with the release of the “Bletchley Declaration” which notably saw China join the list of signatories.

So far the summit seems like a lot of talk with little concrete policy. If you want a quick catch-up then read Sam’s great roundup of the first day’s proceedings.

There was some talk of potential catastrophic future risks, but Nick Clegg says we should be focusing on the risks AI presents right now.

The TL;DR version of the summit is: ‘Let’s all try to be safe while we play with AI. See you all in 6 months for another AI safety chat in South Korea.’

Apparently UK PM Rishi Sunak’s tech skills haven’t inspired a lot of people with confidence.

Is the Executive Order dead in the water?

After speculation and rumors in the buildup to the big day, the US government finally released President Biden’s executive order on AI. It’s a huge document and the breakdown of the executive order raises some interesting questions.

It’s also interesting to see which AI safety requirements were left out. At least it’s a start. Now there’s just the small matter of getting the US Congress to enshrine it in legislation.

The average age in the Senate is 64, so don’t expect swift consensus on a 100-page document that references floating-point operation thresholds and AI model weights.

If the AI regulations all get a bit much for you, then you could consider heading off to the high seas. Del Complex says its deep sea platform AI computing rigs could help you dodge AI oversight. Is their claim fact or fiction? You decide.

Big Tech steps up

As governments slow-walk AI safety, Big Tech AI companies are making faster progress in defining AI safety standards. MLCommons and some big names in AI will develop AI safety benchmarks so we can put a number to how safe different models are.

Could releasing LLM weights lead to the next pandemic? If you’re pro open source then this research will have you yelling ‘Lies!’ If you think making LLMs open source is a bad idea, you’ll smugly say, ‘I told you so.’

Are AI risks overhyped? Maybe. But just in case, OpenAI put together its “Preparedness” team to handle AI’s existential risks. The announcement comes as the company rebrands itself as an AGI-centric company.

The latest developments in making AI think and speak in a human-like way could be a sign that AGI isn’t too far off.


AI for me, but not for thee

The Western-centric nature of datasets used to train AI has led to a new wave of digital colonialism. The people doing a lot of the dirty work related to AI aren’t being properly served by the technology.

Things aren’t great in the Middle East at the moment and AI isn’t helping. Deep fakes have been used on both sides of the Israel-Palestine conflict with devastating consequences.

These developments have highlighted just how important credible reporting is. This week Leica unveiled its anti-AI camera to fight deepfakes. Will it be enough to convince people that they can trust their eyes after all?

It doesn’t add up

You’d think ChatGPT would be great at math, but it really isn’t. If you were hoping to let ChatGPT handle your accounting, these results may make you rethink that.

The accountants at Microsoft were a happy bunch as the company’s first-quarter financial results surpassed analyst expectations. The AI-generated poll Microsoft slapped on a Guardian article did dampen the mood, though.

Google apparently also sees a pot of gold at the end of the AI rainbow. It’s set to invest $2B in AI startup Anthropic to add to the $4B that Amazon already put in. Can we expect Claude 3 sometime soon?

OpenAI just added a few new features to ChatGPT Plus. The new features are great but have put a dent in the operations of a lot of AI startups.

Copy that

The legal battles between AI companies and copyright owners continue. This week saw OpenAI smiling as the artists suing the company lost their first round in one lawsuit.

You have to wonder how some of the plaintiffs’ legal arguments made it beyond the first draft. And the reason why two of the three plaintiffs dropped out of the case is laughable.

The actors of the SAG-AFTRA guild are still concerned that AI will clone and replace them. The strike has been going on for more than 100 days now with little sign of letting up.

In other news…

We scan loads of AI headlines so you don’t have to. Here are a few more AI stories that were worth clicking on.

And that’s a wrap.

Do you feel safer knowing that world leaders will be meeting every 6 months to chat about AI safety? Elon Musk summed it up well when he said, “Hope for the best but prepare for the worst.”

Do you think Del Complex is for real, or just a very elaborate ploy to sell some merch? My head says ‘fake’ but my heart says, ‘Please build it!’

We’re still a little undecided over whether open source models will save or doom us. If you have some solid arguments either way then send them our way.

And, as always, if we missed a great story, or if you just want to say hi then we’d love to hear from you.



Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

