Founded in 2018, Runway has been developing AI-powered video editing software for several years. Its tools are used by TikTokers and YouTubers as well as by major film and TV studios. The makers of The Late Show with Stephen Colbert used Runway software to edit the show's graphics, and the visual effects team behind the hit movie Everything Everywhere All at Once used the company's technology to help create certain scenes.
In 2021, Runway collaborated with researchers at the University of Munich to build the first version of Stable Diffusion. Stability AI, a UK startup, then stepped in to cover the computing costs needed to train the model on much more data. In 2022, Stability AI took Stable Diffusion mainstream, turning it from a research project into a global phenomenon.
But the two companies no longer work together. And with Getty now suing Stability AI, alleging that the company used Getty's images, which appear in Stable Diffusion's training data, without permission, Runway is keen to keep its distance.
Gen-1 represents a fresh start for Runway. It follows a handful of text-to-video models unveiled late last year, including Meta's Make-A-Video and Google's Phenaki, both of which can generate very short video clips from scratch. It is also similar to Dreamix, a generative AI from Google revealed last week, which can create new videos out of existing ones by applying specified styles. But, at least judging by Runway's demo, Gen-1 appears to be a step up in video quality. And because it transforms existing footage, it can produce much longer videos than most previous models. (The company says it will post technical details about Gen-1 on its website in the next few days.)
Unlike Meta and Google, Runway built its model with customers in mind. “This is one of the first models developed very closely with the video production community,” says Valenzuela. “It comes with years of understanding of how VFX directors and editors actually work in post-production.”
Gen-1, which runs in the cloud via Runway’s website, is being made available to a handful of invited users today and will open to everyone on the waitlist in a few weeks.
Last year’s explosion in generative AI was fueled by millions of people getting their hands on powerful creative tools for the first time and sharing what they made with them. By putting Gen-1 into the hands of creative professionals, Valenzuela hopes we’ll soon see generative AI have a similar impact on video.
“We’re very close to having full feature films being generated,” he says. “We’re close to a place where most of the content you’ll see online will be generated.”