This week AI got better at solving some of our problems and also found the time to create some new ones. From detecting cancer to enabling fraud, it seems AI is playing both sides of the field.
It’s been so exciting we almost forgot that, with SAG-AFTRA’s ongoing strike over AI, we haven’t had any of our regular TV shows for a while.
AI takes to the skies
If you’re worried that the government is using “chemtrails” to control your mind, you’ll be happy to hear that Google’s got an AI app for that.
The condensation trails, or contrails, are no threat to your tinfoil hat, but they do contribute to global warming. Google is using AI to come up with a clever way to reduce them.
Commercial applications of AI have hogged the headlines but defense interest in AI is probably where most of the real spend is going.
The US Air Force successfully conducted a three-hour unmanned test flight with AI piloting its experimental stealth plane. Imagine what a fighter jet could be designed to do if you didn’t need to take pilot G-force limits into account. And without contrails, you won’t even see it coming.
Could you please stop stealing our stuff?
The argument over the ethics of AI data scraping to train models isn’t any closer to being resolved. Artists insist that copyright law should prevent their images from being used without consent, but it turns out that it’s not so simple.
And while the jury is out on what is or isn’t fair game, the AI companies are grabbing what they can, while they can.
While battling lawsuits on one side, OpenAI discreetly let us know their GPTBot is out scraping public web data.
Google obviously shares OpenAI’s sentiments. It suggested that unless you expressly tell it not to, it will assume you’re OK with your published data being used to train its AI.
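If you run a website and would rather opt out, OpenAI has published the user-agent token its crawler honors in robots.txt. A minimal sketch (the paths in the second part are illustrative, not from anyone’s actual site):

```text
# Block OpenAI's GPTBot crawler from the whole site
User-agent: GPTBot
Disallow: /

# Or allow only a specific section (illustrative example paths)
# User-agent: GPTBot
# Allow: /blog/
# Disallow: /
```

Of course, robots.txt is a polite request rather than an enforcement mechanism, so it only keeps out crawlers that choose to respect it.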
Even Zoom has been feeling the heat. Zoom says it wants to use your usage data to train its AI, but only if you agree. The fine print in its terms and conditions seems to say that you give consent simply by using the tool, but Zoom pinky swears it won’t use your data without asking. It’s all a little confusing.
AI images lie, but so do AI CEOs
A fake image of the British Prime Minister pouring a pint badly sparked a political fight and renewed calls for a way to identify AI disinformation. British politics is weird, but they’ve got a point.
AI hasn’t cornered the market on lying though. In a case of life imitating art, Stability AI’s CEO Emad Mostaque has been caught telling some whoppers. The company is working on some exciting projects but it seems there’s trouble in paradise. It sounds like Stability’s business isn’t very stable at all.
Here’s a taste of Mostaque trying to walk back hinting that he was a secret agent:
> Was puzzled by the “spy” thing & realised it was due to work on counter-extremism I did after being worried about ISIS in Dec 2013 (@BusinessInsider‘s chart of the year alas)
>
> Wrote publicly in @WSJ etc about bits of this https://t.co/Xtr98EFqAu
>
> Suppose folk didn’t get it
>
> 🤔🤷🏾‍♂️ https://t.co/EpXYWt8h6g
>
> — Emad (@EMostaque) August 9, 2023
AI hunger for chips is insatiable
Data centers are snapping up high-end GPUs as fast as manufacturers can turn them out, but supply still can’t keep up with demand. The chip shortage is holding AI development back, and manufacturers say the problem could persist for two to three years.
Nvidia announced an upgrade to its Grace Hopper Superchip, which was only released two months ago. The leap in performance over currently deployed technology is staggering.
AI fraud for fun and profit
Bad actors continue to rub their hands as they think about how AI will make it easier for them to grab cash out of your pockets.
The creator of FraudGPT says he’s got new versions of his generative fraud tool coming out soon.
One of his new AI fraud tools, DarkBERT, may actually have been created by a research group that was trying to prevent cybercrime. We wish we were making this up.
And just when you thought AI tracking couldn’t get any more intrusive, it turns out AI can tell what you’re typing by listening to your keystrokes over Zoom. That’s a big step up from your plain-vanilla keylogger malware.
AI is biased, we probably can’t fix it, and it’s your fault
We like to think of AI as an unbiased piece of software that just spits out the facts, but it’s not true. AI models think all black people look the same and women shouldn’t be hired for the job.
AI companies are trying to fix this. Hopefully, the work Google is doing on its AI skin tone project will help AI see everyone properly, not just white people.
As hard as the engineers try, it may be impossible to make an unbiased AI. You may be unsurprised to hear that it’s not a computer glitch, it’s a human one.
Meanwhile in China
The AI race is on, and the US is doing whatever it can to hold China back. Chinese big tech companies have been snapping up around $5bn of Nvidia chips to power their data centers as the US desperately pumps the brakes.
Citing national security concerns, the US government has taken steps to limit certain US investments in China’s high-tech sectors. It doesn’t sound like its previous efforts made much difference though.
The new Chinese AI regulations kick in next week, and it seems like open-source models are going to be the only way to comply. Alibaba just released its new LLM as open source, in direct competition with Meta’s Llama 2.
Trust me, I’m a doctor
AI isn’t quite up to replacing your doctor yet but it is being used to do some really impressive things in the medical field. Detecting cancer is tricky but researchers have found a way to use machine learning to detect cancer in a single DNA molecule.
Combining AI with robotics has also made a huge impact on a woman who now considers herself 80% human and 20% robot. Cancer detection and bionic limbs? AI can do that. Affordable basic healthcare? We’re not quite there yet.
Are you not entertained?
Has the ongoing actors’ and screenwriters’ strike got you watching reruns of old shows? If you’re looking forward to some fresh content, we hope you like Disney.
Recent AI job postings at Disney are a clear sign that it plans to continue its tradition of being at the front of the entertainment tech race. Who needs actors, animators, writers, etc., when AI can do it for you? Will it be any good, though?
Are you tired of the music served up on your favorite streaming service? Have you wondered what it would sound like if Johnny Cash sang “Barbie Girl”? Well, thanks to Google and AI mashups, you need wonder no more.
In other news
Here are some other AI bits and pieces that caught our eye:
- Stability AI announced the release of its generative coding solution called StableCode.
- Google said, “Hey, we’ve got one of those too,” and released its full-stack AI code generator called Project IDX.
- The US government is running a competition to get people to break AI.
- An author says Amazon is selling books bearing her name but she didn’t write them, AI did.
And that’s it for this week’s roundup. If we missed a great AI story let us know.
And if you’re writing an email, making a YouTube video, or talking to a colleague in a Zoom meeting this week, please try to be extra nice. AI models are going to use all of your data for training, so the AI will only be as nice as you are.