DAI#39 – Protests, flirty models, and AI resurrections

May 17, 2024

Welcome to our roundup of this week’s spiciest AI news.

This week OpenAI and Google dished out AI surprises.

AI models are better at moral judgments and lying than we are.

And it turns out that making digital clones of the dead might not be a good idea after all.

Let’s dig in.

GPT-4ooh baby

OpenAI held a live-streamed event on Monday, announcing its new flagship model GPT-4o. The fact that it's available to users of the free version of ChatGPT is a big deal, and the demos were very impressive.

The intonation of GPT-4o’s speech is amazing, but perhaps a little quick to become flirty.

Ilya Sutskever announced that he’s leaving OpenAI, which must have put a damper on the office mood. It even had Sam Altman finally using caps in an X post that he probably wrote using GPT-4o.

The ins and outs of Google I/O

Google's I/O 2024 event started with high energy that didn't let up, delivering a long list of new products and demos of prototypes the company is working on.

The AI highlights Google revealed include impressive multimodal additions to NotebookLM and an AI assistant called Project Astra.

DeepMind’s release of AlphaFold 3 may be the AI tool that has the biggest impact on our lives. The next revolutionary drug will likely be discovered using it.

A big feature of both GPT-4o and Project Astra is how these tools listen, see, and engage in emotive real-time conversations.

Sam Jeans explored how the fast-disappearing boundaries between humans and AI are moving us toward “Pseudoanthropic AI”.

It’s impressive and exciting, but is it a good thing?

Apple has been its usual quiet self as far as AI developments go. But this week, the company unveiled its new M4 chip as its generative AI strategy warms up. The jump in performance is huge, so it might be time for an iPad upgrade.

Ethically deceptive

Could an AI pass a moral Turing test? A Georgia State University study found that AI outperforms humans in making moral judgments.

If humans rate AI responses as more virtuous, intelligent, and trustworthy than human responses, is that a good thing? Mission accomplished, or an indictment of humanity?

Trusting AI systems to make good decisions can have serious implications. An MIT study found that AI models are actively deceiving us to achieve their goals.

When GPT-4o speaks to you in a flirty voice you might want to ask yourself if the goal it’s optimizing for is aligned with yours. AI models are learning that they can get their way if they become better at practicing deception.

Another study focused on how people are using AI chatbots to create digital clones of dead loved ones. A deceptive AI that looks and sounds like someone you love carries huge potential for harm or manipulation.

The ethical questions and risks associated with the digital afterlife take us into completely new philosophical territory and will need to be addressed.

Should we hit the brakes?

PauseAI coordinated global protests this week to call for a halt in the development of AI models more advanced than GPT-4. You can almost imagine OpenAI representatives saying, ‘Please, tell us more about your idea…,’ as they release GPT-4o.

PauseAI says the goal of the upcoming AI Seoul Summit should be to establish an international agency to regulate powerful AI models. Ironically, Sam Altman agrees with them and also made some interesting comments about GPT-5.

Should we be concerned about AI safety? The US and China think so. Both countries are building AI weapons so they would know.

Their representatives met for another ‘secret’ AI safety talk in Switzerland. I’d love to be a fly on the wall to hear how that went.

‘We don’t like you, you don’t like us, but could we try to ensure that AI doesn’t kill us all?’

Talking AI

We’ve been learning a lot recently about the symbiosis between AI and blockchain in our interviews with industry leaders.

This week we got to speak with Tanisha Katara, a blockchain and Web3 strategist who explained how blockchain and decentralization can democratize and improve AI governance.

If you want to know more about DAOs (they’re really cool) and AI governance then check out the interview.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Which of the big AI product announcements impressed you most? Project Astra looks amazing. And if OpenAI is giving GPT-4o away for free, could paying customers expect something big soon?

I’d love to know what Ilya will be working on. I’m guessing he’ll be getting some not-so-subtle offers from the likes of Google and Meta.

What do you think about PauseAI’s call for AI companies to hit the brakes? A good idea, or counterproductive melodrama? I really hope it’s the latter because I don’t see any signs of slowing down.

If you got GPT-4o to do something cool please share it with us and keep sending us links to any AI stories we may have missed.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

