Runway unveils its hyperrealistic Gen 3 Alpha T2V generator

June 18, 2024

  • Generative AI company Runway unveiled its latest text-to-video generator called Gen 3 Alpha
  • Gen 3 Alpha generates hyperrealistic video with smooth motion and fine-grained video controls
  • The tool takes just 90 seconds to generate a clip up to 10 seconds long

Runway unveiled its latest text-to-video (T2V) generator, called Gen 3 Alpha, and the demos hint that this could be the best AI video generator yet.

OpenAI’s Sora wowed us a few months ago but there’s still no word on when (or if) it will be released. Runway already allows free and paid access to its previous generation Gen 2 T2V tool.

Gen 2 makes some decent videos, but it’s a little hit-or-miss and often produces weird anatomy or clunky movements when rendering people.

Gen 3 Alpha delivers hyperrealistic video with smooth motion and coherent human models.

Runway says, “Gen-3 Alpha excels at generating expressive human characters with a wide range of actions, gestures, and emotions, unlocking new storytelling opportunities.”

The improved fidelity comes with a speed upgrade too: maximum-length 10-second clips are generated in just 90 seconds. The 10-second limit matches Sora’s, is twice that of Luma, and three times that of Runway’s Gen 2.

Besides the improved human representations, the accurate physics in the videos is truly impressive.

Runway says Gen 3 Alpha will power improved control modes that let users select specific elements to animate and direct detailed camera movements, with “upcoming tools for more fine-grained control over structure, style, and motion.”

The degree of camera control gives you an idea of how close we are to the end of traditional movie production.

OpenAI previously hinted that alignment concerns are one of the reasons it hasn’t released Sora yet. Runway says Gen 3 Alpha ships with a new set of safeguards and C2PA provenance metadata, which allows generated videos to be traced back to their source.

General world models

The idea of turning text into videos will appeal to most users, but Runway says Gen 3 Alpha represents a step towards a different goal.

Runway says, “We believe the next major advancement in AI will come from systems that understand the visual world and its dynamics, which is why we’re starting a new long-term research effort around what we call general world models.”

Training an embodied AI to navigate and interact with an environment is a lot faster and cheaper when simulated. For the simulation to be useful it needs to accurately represent the physics and motion of real-world environments.

Runway says these general world models “need to capture not just the dynamics of the world, but the dynamics of its inhabitants, which involves also building realistic models of human behavior.”

The coherent motion, physics, human features, and emotions in the Gen 3 demo videos are evidence of a big step towards making this possible.

OpenAI has almost certainly been working on an upgraded Sora, but with Runway’s Gen 3 Alpha, the race for best AI video generator just got a lot more competitive.

There’s no word on when Gen 3 Alpha will be released, but for now you can see more demos on Runway’s site or experiment with Gen 2.


Eugene van der Watt

Eugene comes from an electronic engineering background and loves all things tech. When he takes a break from consuming AI news you'll find him at the snooker table.
