DAI#30 – AI jets, trust issues, and extinction-level threats

March 15, 2024

Welcome to this week’s roundup of human-generated AI news.

This week, AI eroded our trust even though we can’t seem to get enough of it.

Actors, programmers, and fighter pilots might lose their jobs to AI.

And scientists pinky swear not to use AI to make bad proteins.

Let’s dig in.

Trust but verify

Societal trust in AI continues to decline even as the adoption of generative AI tools grows at a frenetic pace. Why are we so willing to adopt a technology in spite of our fears about how it will shape our future? What’s behind the distrust, and can it be fixed?

Sam’s exploration of the dissonance between our growing distrust of generative AI and its rising usership helps us take an honest look at our conflicted relationship with AI.

One of the reasons behind AI skepticism is the alarmist views trumpeted by some sectors of the industry. A report commissioned by the US government says AI poses an “extinction-level threat” to our species.

The report recommends banning open-source models, even as proponents of open-source AI dismiss it as fear-mongering built on bad science.

The AI news was a little light in the fakery department this week. Kate Middleton, the Princess of Wales, was hit with a fake-image controversy over her overenthusiastic edit of a photo of herself and the kids.

The media’s outrage over an edited photo of a celebrity is a little hypocritical, but maybe it’s a good thing that society is becoming more sensitized to what’s real and what isn’t. Progress?

AI jobs counter-strike

The gaming industry has been quick to adopt AI, but actors and voice artists aren’t happy with how things are going. SAG-AFTRA now says it’s “50-50 or more likely” to strike over video game negotiations.

Playing a flight simulator game may soon be the closest fighter pilots get to the real thing, as the prospect of AI replacing them becomes a reality. The Pentagon is planning to have the first of 1,000 AI-piloted mini ghost fighter jets built in the next few months.

Swarms of autonomous fighter jets armed with missiles and directed by an AI prone to hallucinations. What could possibly go wrong?

Stability AI CEO Emad Mostaque raised eyebrows when he said there would be no need for human programmers in the next few years. His bold claim looks increasingly likely to come true.

This week Cognition AI announced Devin, an autonomous AI software developer that can complete entire coding projects based on text prompts. Devin can even autonomously set up and fine-tune other AI models.

Perhaps Mostaque’s claim needs clarification. Soon there may be no need for humans who can write code, because tools like Devin will enable anyone to be a programmer.

If you’re an out-of-work actor, fighter pilot, or programmer looking for a job in AI, here are some of the best universities to study AI in 2024.

Safety first

AI tools like DeepMind’s AlphaFold have accelerated the design of new proteins. How do we make sure these tools aren’t used to design proteins for malicious purposes?

Researchers have created a set of voluntary safety rules for AI protein design and DNA synthesis, with some notable names signing up to support it.

One of the commitments is to only use DNA synthesis labs that check if the protein is dangerous before making it. Does that imply some labs don’t do that? Which labs do you think the bad guys are likely to use?


A team of researchers developed a benchmark to measure how likely an LLM is to help a bad actor build a bomb or a bioweapon. Their new technique helps the AI model unlearn dangerous knowledge while retaining the good stuff. Well, almost.

Aligned models will politely decline your request for help making a bomb. But if you render the naughty words in ASCII art and use a clever prompting technique, you can easily bypass these guardrails.

Heart-shaped AI

Researchers used AI to explore how genetics influences the morphology of a person’s heart. Creating 3D maps of the heart and linking them to genetics will be a significant help for cardiologists.

Mayo Clinic researchers have developed “hypothesis-driven AI” for oncology. The new approach goes beyond simple big-data analysis by generating hypotheses that can then be validated against domain knowledge.

This could be a big deal for testing medical hypotheses and predicting and explaining how patients will respond to cancer treatments.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

And that’s a wrap.

Do you trust AI more or less as it becomes a bigger part of your day-to-day life? A more skeptical approach is probably a safer bet, but the doomsayers are getting a little tiresome.

How do you feel about AI-piloted fighter jets taking over from human pilots? I’m looking forward to seeing the maneuverability of these machines but the idea of an AI glitch coupled with missiles is unnerving.

I’m a little disappointed that the only AI fake news we got this week was the royal “Jerseygate” but I guess we should see this as progress. I’m sure normal service will resume next week as the election hots up.

Let us know which AI developments caught your eye this week and please send us links to any exciting AI news or research we may have missed.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
