This week AI helped us get a better look into the universe but also had us reaching for a magnifying glass to read the fine print in updated terms of service agreements.
Corporations are freaking out about their staff using ChatGPT and it’s got Microsoft acting weird. There have also been some surprising developments on the AI chip front and in AI applications in medicine.
Here’s a wrap of this week’s AI news.
To boldly go where no AI has gone before
Astronomers have come a long way since the Babylonians mapped the stars on clay tablets. Galileo’s telescope made it possible to begin to see how much they’d missed, but AI is taking space exploration to a new level.
Even Elon Musk is getting all existential with xAI’s stated aims of answering “fundamental questions” concerning “reality” and “the universe.” Is the answer to life, the universe, and everything 42 or 420, Elon?
Coming a little closer to Earth, Korean engineers have given an AI upgrade to their robot pilot, Pibot. The humanoid robot uses ChatGPT to learn how to fly a plane and then flips the switches and presses the right buttons faster and more accurately than a human pilot can.
And then, from the stars in the sky, AI has ventured all the way down to the little asterisk at the end of the fine print in software service agreements. Microsoft has joined the likes of Zoom and others by adding sneaky AI terms to its Ts & Cs.
AI gets creative and it’s starting to get weird
Generative AI is changing the way artists create their work. People who have never lifted a paintbrush in their lives are now churning out AI art. It turns out that even Snapchat’s AI wants in on the action.
The Snapchat chatbot posted a weird image to users’ stories and then got all evasive when asked about it. Was this a glitch or was the AI feeling a little left out and looking for friends?
The 1-second story posted by the Snapchat AI chatbot: pic.twitter.com/PLuanlkQiw
— Pop Base (@PopBase) August 16, 2023
Politicians are no strangers to being creative when making election promises or slandering their opponents. US regulators are trying to get ahead of the deepfake curve so that candidates don’t get carried away with their Midjourney experiments in the upcoming elections.
Perhaps competing parties should just be allowed to make obviously fake stuff. Like a video of Trump trailing toilet paper from his shoe going into Air Force One. Or maybe a video of Biden not being able to find his way off a stage.
Is that an AI in your pants?
Engineers in London have designed AI-powered trousers that you can rent for £5,000 per month. It sounds steep, but the AI pants have made a huge difference in the lives of people like Julie Lloyd. I wonder if their programmers call their software “zip code”.
If you feel psychologically triggered by that terrible pun and want to talk about your feelings, Google’s DeepMind has an AI app for that. While Google is careful not to call it a therapist, its new chatbot is intended to be an emotionally sensitive personal life coach.
In other AI medical news, the NHS has started a trial using AI technology in radiotherapy treatment. Normally a radiographer manually outlines the tumors identified in CT scans before they get zapped. It turns out that’s just one more thing AI can do a lot faster than humans can.
We don’t need no education
Teachers are scrambling to find ways to ChatGPT-proof their curricula. When students got their hands on calculators decades ago, it was tough to convince them of the need to learn long division.
Now, with ChatGPT, educators are having a tough time evaluating student assignments. They have to figure out whether they did a great job teaching their students or whether the students are just really good at using ChatGPT.
While some are fighting hard to detect and eliminate AI in education, others are embracing the positive changes AI brings to the classroom.
Generative AI can be dangerous. The Pentagon says “Hold my beer”
The US Department of Defense has set up a task force to help the military harness the power of AI “in a responsible and strategic manner.” Replace the word “AI” with “mustard gas” and read that sentence again to see how ridiculous that sounds.
At least they’re pretending to be responsible with the prospect of AI as a weapon.
There are some sectors of society that are trying to make sure AI doesn’t kill us all quite so quickly.
The White House sponsored a challenge at the DEF CON hacker event where contestants were asked to test the most popular AI models to see if they could break them. It didn’t go well.
One of the models tested at the event was Stability AI’s Stable Beluga 2, the LLM powering the company’s experimental Stable Chat product.
We can’t vouch for how “safe” it is but it’s free and connected to the internet so it may be a great alternative to ChatGPT.
Are Microsoft and OpenAI no longer BFFs?
Microsoft has plowed about $10B into OpenAI and apparently feels that picking up the cheque gives it some seniority in the relationship. The partnership started off well, but it seems there may be trouble in paradise as corporations get twitchy over ChatGPT risks.
This week Microsoft (kinda) released a competing product and trash-talked OpenAI. Sam Altman was like, ‘Bro, you take that back!’ To which Microsoft said ‘I didn’t do anything,’ and its new product and comments promptly disappeared. It’s a weird story.
Gulf nations tell chip manufacturers: “Take my money!”
While China and the US continue to fight over AI chips, institutions in countries like Saudi Arabia and the UAE have started to splash the cash too.
The amount of money they’re spending is mental and they’ve developed some impressive LLMs of their own too.
Nvidia is still the flavor of the month as far as high-end AI GPUs go, but IBM has been working on some interesting tech too. Its new analog “brain” chip is designed to work in a way similar to a human brain and is claimed to be up to 15 times more energy efficient than comparable digital chips.
In other news…
Here are a few other AI news stories that got our attention this week:
- AI-authored books continue to top Amazon’s best-seller lists, including this one about the Maui fires, published while the smoke was still rising.
- OpenAI could go bankrupt by 2024 as it spends $700,000 a day to keep ChatGPT going.
- OpenAI says it will deploy GPT-4 for content moderation.
- An interesting timeline of the development of AI.
And that’s it for this week’s roundup. Did you get an image posted by AI in your Snapchat story? We’d love to hear how your interaction with the chatbot went.
Also, would you consider flying in a plane piloted by an AI robot? Would you pay a premium for that or would you expect a discount? I’m not sure which of those options I’m more comfortable with yet.
We had a bumper crop of stories on DailyAI this week and couldn’t possibly fit them all into this roundup so check out our home page for more.