Welcome to this week’s roundup of artisanal handcrafted AI news.
This week we found out AI can’t help you build a bioweapon after all, or maybe it can.
Tay Tay’s army of Swifties fought fake AI porn.
And AI made us trust politicians even less than we already did.
Let’s dig in.
Taylor Swift found herself the target of explicit AI deep fake images this week. The understandable outrage that ensued was also directed at platforms like X, which seemed to have no defense against this kind of content.
There was widespread reaction from both industry and members of the public. Taylor’s army of Swifties went into full Sherlock Holmes mode. They tracked down and doxxed the guy allegedly behind the images after he confidently claimed, “They’ll never find me.”
It’s simply too easy for anyone to create these kinds of images using AI. With InstantID, it just got a lot easier. The model enables AI image generators to create reproductions from a single image of a person’s face.
The InstantID research paper was published before the Swiftgate drama, but guess whose face the researchers used as an example.
It’s official, we can’t believe our eyes and ears anymore. Sam’s compilation of the political deep fakes we’ve seen over the last few months highlights the progression in both scale and capability that AI is affording fraudsters.
AI clones of voices have improved dramatically. We’ve gone from robotic monotone attempts to full-scale imitation of tone and emotion. The fake George Carlin comedy video on YouTube was a case in point.
Carlin’s estate is now suing the creators of the AI fake comedy show with some surprising admissions by the people behind the video.
Deep fake audio is getting easier to make and harder to detect. The democratization of these AI tools means ordinary people are finding themselves targets. A Baltimore principal says the voice in an offensive audio recording wasn’t his but an AI fake. You be the judge.
My first instinct was, “He’s lying,” but then I saw what an audio forensic expert said about the clip.
Open and shut
What’s in a name? The “Open” in OpenAI doesn’t seem to mean what it once did. OpenAI’s drift from its namesake and founding principles makes for interesting reading.
The company claims it’s still transparent, as long as you don’t ask questions about its financial statements, model training data, conflict-of-interest policy, or why it fired Altman… You get the picture.
what 2024 is going to be like pic.twitter.com/bvgBFSAW4H
— kache (yacine) (KING OF DING) (@yacineMTB) January 31, 2024
Something OpenAI has been candid about is its ambitions to produce its own AI chips. Altman quietly jetted off to South Korea to chat with Samsung and some other chip manufacturers for help with this.
OpenAI’s opaque operations may be more in line with the secretive nature of leaders further north of Seoul.
A report that uncovered the dynamics of North Korea’s resurging AI industry shows that AI is playing a bigger role there than some may have thought. I’m guessing Kim Jong Un is a big fan of Meta’s open-source strategy.
The Biden administration now requires cloud companies to report foreign users. If you’re a computer scientist in North Korea, you may want to use a decent VPN when you connect to AWS.
GPT-4 gets brainy
If you ask GPT-4 to help you brainstorm, it gets a little repetitive with some of the ideas it comes up with. Researchers came up with some clever prompt engineering strategies that can fix that.
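The researchers’ fixes work at the prompting stage, but the repetition problem itself is easy to picture: given a batch of generated ideas, near-duplicates can be filtered out after the fact. Here’s a minimal sketch of that idea (my own illustration using word-overlap similarity, not the method from the paper):

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two idea strings (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def dedupe_ideas(ideas: list[str], threshold: float = 0.6) -> list[str]:
    """Keep an idea only if it isn't too similar to one already kept."""
    kept: list[str] = []
    for idea in ideas:
        if all(jaccard(idea, k) < threshold for k in kept):
            kept.append(idea)
    return kept

ideas = [
    "Launch a podcast about local food trucks",
    "Start a podcast about local food trucks",   # near-duplicate, gets dropped
    "Host pop-up dinners with rotating chefs",
]
print(dedupe_ideas(ideas))
```

Post-filtering like this only hides repetition, of course; the point of the prompt-engineering strategies is to get genuinely more diverse ideas out of the model in the first place.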
If you’re looking for some creative entries to add to your resume, ChatGPT can help with that too. It turns out that AI is widely used by job applicants, and hiring managers encourage it.
This week we discovered something else GPT-4 is good at. Researchers found that GPT-4 agreed with expert doctors on recommended treatments for stroke victims.
Patients suffering from paralysis or ALS could soon benefit from another one of Elon Musk’s plethora of projects. Musk announced that Neuralink completed its first brain implant in a human subject.
This could eventually allow for direct communication between the brain and devices like cell phones or computers. Are we living the prequel to The Matrix?
Musk has also been trying to raise $6 billion to take his AI project xAI to the next level. Take a look at what this man has done so far and then just give him the money. When does this guy sleep?
The RAND Corporation got a lot of criticism for an October report that said LLMs “might” help bad guys make a bioweapon. Their latest report says that might not be true after all. And then OpenAI did a study of their own that concluded that a special version of GPT-4 might help the bad guys a little.
A very real danger could come from AI agents let loose on the internet without supervision. Researchers outlined the potential dangers and proposed three things that could increase visibility into AI agents to make them safer.
Is your superpower pointing out other people’s mistakes? The CDAO and DoD are organizing events to identify bias in language models. They’ll even pay you bounties for spotting bias bugs.
AI in EU
The upcoming EU AI Act Summit 2024 kicks off next week. The summit will be an ideal opportunity to discuss AI regulation proposals and get to grips with the EU AI Act and its global implications.
Some civil rights groups are calling for the EU to probe OpenAI and Microsoft. The big chunk of cash Microsoft invested in OpenAI raises questions about the impact on competition within the AI sector.
It might be tough to argue against that, as Microsoft is expected to post its best quarterly revenue growth in two years. A lot of that comes off the back of AI developments that OpenAI helped it make.
Italy’s data protection authority has raised data privacy concerns over ChatGPT’s slipups with personal information and the consequences of libelous hallucinations.
In other news…
Here are some other clickworthy AI stories we enjoyed this week:
- The U.S. National Science Foundation launched the National Artificial Intelligence Research Resource (NAIRR) pilot program.
- AI companies will need to start reporting their safety tests to the US government. Not everyone agrees that this is a good idea.
“All your models are belong to us.”
This is the endgame of heavy lobbying efforts for regulatory capture by those seeking to sell “AI Safety as a service”.
Models should be permissionless for innovation to flourish. This is a form of neural net taxation and will be net decel. pic.twitter.com/wpP7zwMCv3
— Beff Jezos — e/acc ⏩ (@BasedBeffJezos) February 1, 2024
- An ex-board member is critical of the risk associated with OpenAI’s current board structure and the power it holds.
- Meta’s free Code Llama AI programming tool closes the gap with GPT-4.
- OpenAI and Common Sense Media partner to protect teens from AI harms and misuse.
And that’s a wrap.
To the Swifties in our audience, we hope you’ve recovered from the traumatic week. Did you stumble across the AI pics while browsing X, or did you have to work hard to find them online?
I don’t think anyone will be making fake nudes using my face, but I may be more careful who I send a voice note to in future. AI voice cloning is getting crazy.
Have you signed up for the Neuralink trial? Would you let Elon Musk put a chip in your brain? Musk managed to blow up a few SpaceX rockets before he got it right. I think I’ll wait until they’ve worked out the bugs.
Let us know what you think, and send us links to any juicy AI stories we may have missed.