Highlights:

  • The company claims that Gen-3 Alpha can produce higher-fidelity videos than its previous generation of AI models.
  • The system will add provenance information to videos made with Gen-3 Alpha indicating that they were produced by AI.

Runway AI Inc. has unveiled Gen-3 Alpha, a new AI model capable of generating ten-second videos from text prompts.

Runway, a New York-based startup, has raised over USD 190 million from investors, including Nvidia Corp. and Google LLC. The company introduced its series of video-generating models in February last year, and Gen-3 Alpha is the third installment in that series. Reports indicate that Runway plans to add more capable versions of the new model to the lineup over time.

The company claims that Gen-3 Alpha can produce higher-fidelity videos than its previous generation of models. One reason for the quality improvement, Runway says, is that the model represents motion better. Gen-3 Alpha is also better at keeping a video's frames consistent with one another.

Runway has also made changes that shorten the model's video generation time: according to the report, Gen-3 Alpha can produce a ten-second video in 90 seconds.

To prevent the model from being exploited to create harmful content, Runway is developing a new set of safety measures. As part of that effort, the company will incorporate a provenance system based on the C2PA standard. The system will add information to videos made with Gen-3 Alpha indicating that they were produced by AI.

The C2PA standard was developed by an industry coalition of the same name, backed by tech giants including Intel Corp. and Arm Holdings plc. The technology lets a multimedia file carry metadata that shows whether it was created with AI and reveals other details, such as when it was made. C2PA stores this metadata in a tamper-evident way, so attempts to alter it can be detected.
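The core idea behind such provenance systems is binding the metadata to the media's exact bytes with a cryptographic hash, so any edit breaks the binding. The sketch below illustrates that idea in Python; it is not the real C2PA format (which embeds digitally signed JUMBF manifests in the media file), and the field names and labels used here are hypothetical.

```python
import hashlib
import json

def make_manifest(asset_bytes: bytes) -> dict:
    """Build a toy provenance manifest bound to the asset's content hash.

    Illustrative only: real C2PA manifests are signed binary structures,
    not plain JSON-style dictionaries like this one.
    """
    return {
        "claim_generator": "ExampleTool/0.1",  # hypothetical generator name
        "assertions": [
            # Hypothetical assertion flagging the content as AI-generated.
            {"label": "ai_generated", "value": True}
        ],
        # Hard binding: the hash ties the manifest to these exact bytes.
        "asset_hash": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Recompute the hash; any edit to the asset breaks the binding."""
    return manifest["asset_hash"] == hashlib.sha256(asset_bytes).hexdigest()

video = b"\x00\x01fake-video-bytes"
manifest = make_manifest(video)
print(json.dumps(manifest, indent=2))
print(verify_manifest(video, manifest))            # untouched asset verifies
print(verify_manifest(video + b"edit", manifest))  # tampering is detected
```

Note that a hash alone only proves the bytes are unchanged; the signing step in the real standard is what lets a viewer trust who made the claim.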

Runway reportedly plans to release Gen-3 Alpha to users in the next few days. The model will power three of the company's cloud services, which can generate images in response to user prompts and create videos from text and photos. Runway plans to release further capabilities in the future that will offer even more precise control over motion, style, and structure.

As part of its long-term commercialization goals, the company will also offer customized versions of Gen-3 Alpha to businesses. Runway says these personalized models will let clients match the look of AI-generated videos more closely to their projects' requirements.

Runway has designated "general world models" as the primary focus of its long-term AI development agenda. According to the company, these systems would simulate objects rather than merely render them, drawing on "realistic models of human behavior" and other complex data. Runway says Gen-2 is an early example of this approach because it has "some understanding of physics and motion," as is Gen-3 Alpha.

The company faces several competitors. OpenAI unveiled Sora, an AI-powered video generation system, in February. More recently, Luma AI Inc. unveiled its competing model, Dream Machine, last week, offering comparable features.