DAI#6 – AI becomes more human, comes over to the dark side

September 29, 2023

Welcome to the roundup of AI news that made our list this week. Enjoy human-written content while you can still get it.

This week, as AI became more human, it showed us its dark side. OpenAI blew us away with ChatGPT’s new voice and image features. And the CIA promises it will behave as it builds its own AI tool.

Let’s dig in.

Mirror, mirror, on the wall

As AI becomes more human it sometimes shows us its dark side. Is it an inherent ‘shadow self’ peculiar to AI, or a reflection of how close it’s getting to imitating us? Sam Jeans explores the surprising things AI tells us about ourselves and what AI thinks a ‘Crungus’ looks like.

Wondering how that influencer keeps up the energy on the online shopping live stream? Are you sure it’s a real person? See if you can tell these Chinese AI live streamers from the real thing.

Meta announced a range of new generative AI features coming to WhatsApp, Instagram, and Facebook. The chat assistant and image editor look great.

The new chatbot characters inspired by the likes of Paris Hilton and Kendall Jenner may be another accurate reflection of who we really are as a society.

It’s alive!

Sam Altman posted on Reddit that “agi has been achieved internally,” only to later say he was just kidding. Should we really be joking about stuff like this? Read more here to see what this has to do with an oversupply of paper clips.

The mystery deepens, as the AGI claim was also made on an X account belonging to someone called Jimmy Apples. The account had made accurate predictions about OpenAI releases before, but it no longer exists. How do you like them apples?

Speaking of apples, Sam Altman and ex-Apple designer Jony Ive could be working on designing an AI device. We have no idea what it will look like but we want one.

Altman may have been ‘kidding, not kidding’ about how close we are to AGI because GPT-4’s new voice and image features are incredible. Can you imagine what GPT-5 and GPT-6 will be capable of?

GPT-4V, the tech underpinning ChatGPT’s vision function, can do some really cool stuff like this.

AI makes research and cheating easier

Researchers normally have to pay assistants to wade through mountains of published papers for data and references. Elicit just raised $9M to make that process a whole lot easier.

Science journal Nature took advantage of the extra time researchers now have and asked 1,600 of them for their opinions on AI. They were mostly in favor but with some interesting reservations.

If you’re hoping to use ChatGPT to write your next college paper, then go right ahead. US colleges are finally realizing there’s no way they can detect AI-written content.

The Skynet is falling!

UK Deputy PM Oliver Dowden addressed the UN to warn the world of AI’s global risks. Tech companies are basically saying ‘We’ve got this,’ but some people are comparing the complexity of regulating AI, and the risk it poses to humanity, to that of nuclear weapons.

Speaking of nuclear, Microsoft is looking to hire a nuclear expert to drive its new energy program. AI cloud computing platforms powered by experimental nuclear power reactors combined with the possibility of AGI looming? I’m sure it’ll be fine.

The more immediate danger is when AI chatbots are used by people who believe they’re conversing with an angelic entity that wants them to assassinate royalty with a crossbow. The courts are still deliberating whether this guy is sane enough to stand trial.

AI writes books and fights crime

ChatGPT’s writing isn’t bad but it’s not great either. Companies are hiring creative writers to train AI to write better. Let’s hope they do a terrible job. It’s like sending your best soldiers over to the enemy to train them to improve their aim.

Amazon has capped self-published book uploads at three a day as its shelves continue to fill up with AI-generated books. The company is also looking to make big moves in the AI space, having invested $4 billion in Anthropic, the company behind Claude.

If you hate trees and insist on buying printed books, then AI will make sure your Amazon parcel doesn’t get stolen. UPS is using AI to fight porch piracy, but there are some ethical issues that crop up.

The CIA says it also wants to use AI to keep US citizens safe from the bad guys. It’s developing its own version of ChatGPT and promises it will only gather and use data in a legal and ethical way. The hallucinations have started before its AI tool is even finished.

Sound, Pictures, Action!

Spotify says it will allow AI-generated music on its platform but only if the vocals don’t mimic a real singer without their consent. What if it kinda sounds like a real artist but not quite? Spotify’s CEO says “It is going to be tricky.”

Spotify says using AI to auto-tune a voice is fine, but you can’t use AI to mimic that essentially fake auto-tuned voice. Yeah, we don’t get it either.

Getty Images is getting into the generative AI game. While simultaneously suing Stability AI for ripping off its image library, the company launched its own Nvidia-powered AI image generator.

The first combinations of moving images and sound entertained us about 100 years ago and it feels like the actors and writers strike has been going on about that long too. The strike may finally be coming to an end as the studios thrash out how AI will be used in Hollywood.

Will they really work it out? I wouldn’t put those picket signs in storage just yet.

In other news…

Here are some other click-worthy AI stories that we enjoyed reading this week:

  • Jimmy Apples, whose X account sadly no longer exists, claimed that OpenAI has a model called Arrakis that is more powerful than GPT-4.
  • OpenAI announced that ChatGPT can now browse the internet and return direct links to resources. Sorry Google.
  • Tesla’s Optimus robot upskilled dramatically. Tesla says it’s “capable of performing unsafe, repetitive or boring tasks.” Defense companies are probably thinking they’ve got some unsafe, repetitive tasks they’d like robots for.
  • The White House may require cloud computing companies to disclose which customers are purchasing large amounts of computing power. They assume that supervillains will be using LLMs that require the most compute.

And that’s a wrap. Which of our articles was your favorite? Do you think Sam Altman was kidding, or giving us a warning with his AGI comment? Sam, blink twice if the AI is making you say this stuff.

If AGI really was achieved, do you think it would condescend to speak to humans who find Meta’s character chatbots entertaining?


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
