DAI#2 – AI tries to kill us with mushrooms and Trump raps

September 1, 2023

This week in AI we found that people still don’t trust AI but can’t get enough of the stuff. AI is probably going to kill us, but only if we read some of the books it writes.

A Donald Trump rap track nearly topped the iTunes chart, and the White House may be arranging that Musk v Zuckerberg cage fight we’ve all been waiting for. Let’s dig in.

We like AI, we just don’t trust it

A recent study shows that while we like the idea of AI, we just don’t trust it. When it comes to things like police surveillance or drones we’re a little conflicted. It’s interesting to see which demographics are most skeptical of AI.

If we’re honest, a lot of the conflict we feel about AI is self-created. We want AI to be transparent and safe for us, but inscrutable and dangerous for our adversaries. When AI is deployed on the battlefield and things go wrong, who do we blame?

Eat this, it’s perfectly safe

A lot of our trust issues with AI come from people churning out garbage using generative AI tools. Amazon is awash with AI-authored books pretending to be written by humans.

If you’re lucky you may just end up disappointed by how AI thinks a ‘50 Shades’ novel should read. But if you happen to pick the wrong book, here’s how AI could kill you with mushrooms.

Fake it until they make it

Deepfakes are getting easier to make and harder to spot. Google is trialing a digital watermark to identify AI-generated images. It’s got some neat tricks to stop fakers, but will it be enough to stop this kind of disinformation?

Some deepfakes are obviously intended as satire. Like this parody of ‘Donald Trump’ rapping about his brush with the law which reached number 2 in the iTunes charts this week.

The White House is trying to tame AI

US authorities are trying desperately to put the AI genie back in the bottle before it causes too much trouble. The White House recruited hackers to get LLMs to say terrible biased stuff so they can be fixed.

But you don’t have to be a hacker to get ChatGPT to help you break the law. If you want to know how to make poison or drugs then the ChatGPT jailbreaks are freely available.

Chuck Schumer, who at 72 is one of the younger people in the US government, announced the date of his first AI Insight Forum where they’ll tackle AI regulation.

Chuck said, “We need the best of the best sitting at the table.” So we’re guessing they won’t ask Kamala Harris to give us a repeat performance of her explanation of the two words AI stands for.

There will be some big names in attendance, including Sam Altman, Mark Zuckerberg and Elon Musk. With this much ego in the room will we get everyone to sing Kumbaya around the AI campfire or will we get the cage fight we’ve been patiently waiting for?

You can’t spell China without AI

Besides domestic concerns over AI, the US is extremely concerned about the threat China poses as it makes progress with its own AI technology. 

Biden banned Nvidia from selling its high-end chips to China, and now the ban has been extended to some countries in the Middle East too. The US is concerned that China is getting too friendly with tech companies in Saudi Arabia and the UAE.

China’s new AI laws kicked in two weeks ago. This week saw the first generative AI tools receive approval from its Ministry of Truth or whatever they call their department that handles these things. 

So can we grab your data or not?

AI companies in the UK have been on a bit of an emotional rollercoaster ride lately. Previously, UK authorities said copyrighted music, art, and literature were all fair game for AI training.

Artists voiced their concern over AI learning to be creative from their work and ultimately going into competition with them. Now the UK government may be changing its mind.

There’s no need to worry about the robots, yet

While ChatGPT says ‘Hey, look at all the cool stuff I can do,’ human brains are like ‘Do you even lift bro?’ Because our brains are plugged into our environment on so many levels we’ve still got a huge advantage over AI.

Scientists are working on new architectures and ways to free AI from the inside of computers so it can start thinking more like human brains do.

For these new kinds of intelligence to learn at scale they’ll need to interact a lot more with their environment. MIT researchers have developed a technique to enhance robotic object manipulation with minimal computational resources. 

Let’s hope that once robots begin to move and think like us, they behave better than we do. For now, it seems our brains will help us outsmart the robot attack for a little while yet.


Meta gets its geek on

Meta continues to dish out its latest tools for free, and this week it was the programmers who got all excited. The company released Code Llama, its AI tool that takes natural language and turns it into code.

Doesn’t ChatGPT already do that? Kinda, but Code Llama is free and has a few tricks up its sleeve.

Have you ever squinted at a maths or science research paper and tried to understand what those weird squiggles mean? Well, then you’ll have some sympathy for AI models trying to make sense of them. 

Meta’s Nougat now makes scientific texts machine-readable, which is something other AI models are really bad at, as it turns out.

In non-Meta science news, engineers have taught an LLM to speak “chemistry”. In the pursuit of new materials with useful properties, polyBERT understands the grammar and syntax of atoms. And it works 100 times faster than human scientists.

Some good news for OpenAI for a change

OpenAI has had some negative press lately with its on-again, off-again Microsoft relationship, but this week it featured in some positive news too.

OpenAI partnered with Scale to fine-tune GPT-3.5 Turbo, making its paid model a very attractive prospect even with all the free models on offer from its competitors.

The company also announced an enterprise-friendly version of ChatGPT. The performance and upgraded security and privacy features make it an easier sell to ChatGPT skeptics.

Speaking of skeptics, the list of websites blocking GPTBot, OpenAI’s web scraper, continues to grow. We were a little surprised by which websites blocked it and which haven’t yet done so.

OpenAI is still fighting copyright lawsuits with authors of books that it ‘may or may not’ have scraped but ‘definitely probably’ did. The company’s motion to dismiss makes some good arguments and highlights how far-fetched some of the authors’ claims are.

In other news…

  • It was a bit of a slow week for AI medical news, but scientists did find the time to teach AI to help treat brain tumors.
  • The people who brought us Deliveroo got funding from Google to create Jitty, an AI that will find the perfect home for you.
  • There are rumors that Meta is planning to take on GPT-4 with Llama 3 but they’re going to charge for it. Just kidding, it’ll be free like everything else they’re making lately.
  • A new AI image generator, Ideogram, has emerged on the scene as a free potential alternative to Midjourney. It’s already better at adding text to images than Midjourney is. 
  • Google released Duet, its answer to Microsoft’s Copilot, and it’s charging the same price for it. Coincidence?

And that’s a wrap on this week’s AI news. Which story was your favorite? The one about how AI isn’t even close to how our brains work made me feel a little smug. Also, I may have listened to the Trump AI rap more than once.

Would you trust an AI if it told you that the mushroom you found in the forest was safe to eat? If we don’t hear back from you then we’ll assume Amazon hasn’t pulled all of those books from its shelves just yet.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.

