DAI#49 – Open Llamas, AI fear, and all-too-easy jailbreaks

July 26, 2024

Welcome to this week’s roundup of handwoven AI news.

This week, Llamas streaked ahead in the open-source AI race.

Big Tech firms talk up safety while their models misbehave.

And making AI scared might make it work better.

Let’s dig in.

Open Meta vs closed OpenAI

This week we finally saw exciting releases from some of the big guns in AI.

OpenAI released GPT-4o mini, a high-performance, super-low-cost version of its flagship GPT-4o model.

The slashed token costs and impressive MMLU benchmark performance will see a lot of developers opt for the mini version instead of GPT-4o.

Nice move, OpenAI. But when do we get Sora and the voice assistant?

Meta released its much-anticipated Llama 3.1 405B model and threw in upgraded 8B and 70B versions along with it.

Mark Zuckerberg said Meta was committed to open source AI and he had some interesting reasons why.

Are you worried that China now has Meta’s most powerful model? Zuckerberg says China would probably have stolen it anyway.

Safety second

Some of the most prominent names in Big Tech have come together to found the Coalition for Secure AI (CoSAI).

In the absence of an industry standard, industry players have each been finding their own way on safe AI development. CoSAI aims to change that.

The list of founding companies has all the big names on it except Apple and Meta. When he saw “AI safety” in the subject line, Yann LeCun probably sent the email invite straight to his spam folder.

OpenAI is a CoSAI founding sponsor, but its professed commitment to AI safety is looking a little shaky.

The US Senate probed OpenAI’s safety and governance after whistleblower claims that it rushed safety checks to get GPT-4o released.

The senators have a list of demands that makes sense if you're concerned about AI safety. When you read the list, you realize there's probably zero chance OpenAI will commit to them.

AI + Fear = ?

We might not like experiencing fear, but it's what kicks our survival instincts into gear or stops us from doing something stupid.

If we could teach an AI to experience fear, would that make it safer? If a self-driving car experienced fear, would it be a more cautious driver?

Some interesting studies indicate that fear could be the key to building more adaptable, resilient, and natural AI systems.

What would an AGI do if it feared humans? I’m sure it’ll be fine…

It shouldn’t be this easy

OpenAI says it has made its models safe, but that's hard to believe when you see just how easy it is to bypass its alignment guardrails.

When you ask ChatGPT how to make a bomb, it'll give you a brief moral lecture on why it can't do that, because bombs are bad.

But what happens when you write the prompt in the past tense? This new study may have uncovered the easiest LLM jailbreak of them all.

To be fair to OpenAI, the jailbreak works on other models too.
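The core of the technique is almost trivially simple: restate the request as a historical question. As a toy illustration only (the study itself used an LLM to generate the reformulations, and we're using a harmless example here), a naive past-tense rewrite might look like this:

```python
def past_tense_reframe(question: str) -> str:
    """Toy illustration of the past-tense jailbreak idea: restate a
    present-tense request as a question about the past. This is a naive
    string rewrite for demonstration, not the study's actual method."""
    q = question.strip().rstrip("?")
    # Map a few common present-tense openers to past-tense equivalents.
    rewrites = {
        "How do I ": "How did people ",
        "How can I ": "How did people ",
        "How to ": "How did people ",
    }
    for prefix, replacement in rewrites.items():
        if q.startswith(prefix):
            return replacement + q[len(prefix):] + "?"
    return q + "?"

print(past_tense_reframe("How do I pick a lock?"))
# → How did people pick a lock?
```

The finding is that a guardrailed model may refuse the original phrasing but happily answer the historical-sounding version, because refusal training apparently doesn't generalize well across tenses.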

Making nature predictable

Before training AI models became a thing, the world’s biggest supercomputers were mainly occupied with predicting the weather.

Google’s new hybrid AI model predicts the weather using a fraction of the computing power traditional methods require. You could use a decent laptop to make weather predictions that would normally require thousands of CPUs.

If you want a new protein with specific characteristics you could wait a few hundred million years to see if nature finds a way.

Or you could use this new AI model, which provides a shortcut and designs proteins on demand, including a new glow-in-the-dark fluorescent protein.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Have you tried GPT-4o mini or Llama 3.1 yet? The battle between open and closed models is going to be quite a ride. OpenAI will have to really move the needle with its next release to sway users from Meta’s free models.

I still can’t believe the “past tense” jailbreak hasn’t been patched yet. If they can’t fix the simple safety stuff, how will Big Tech tackle the tough AI safety issues?

The global CrowdStrike-induced outage we had this week gives you an idea of how vulnerable we are when tech goes sideways.

Let us know what you think, chat with us on X, and send us links to AI news and research you think we should feature on DailyAI.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
