Runway’s ‘Gen-2’ generative AI can create trippy videos from text prompts

Forward-looking: When generative AI imaging caught on, it was only a matter of time before someone started developing tools to create videos using machine learning. Runway looks to offer one of the first successful implementations of the technology, unveiling a range of tools it says can help creators make videos, including from text prompts.

Content creation company Runway recently revealed the next step in its toolset for AI-generated videos. The developer’s suite can generate short animated clips from text prompts, static images, a combination of the two, or other elements. One example uses text input to make a clip of New York City seen through an apartment window. Another animates a photo with entirely different lighting.

Another tool combines an image and a video, applying the visual aesthetic of one to the other. Runway's website demonstrates a theoretical application in which the tech transforms rudimentary real-world and 3D mockups into animated renders.

The new tech, dubbed Gen-2, is the second generation of the company's generative AI tools, which a whitepaper details. The first generation synthesizes videos using diffusion models and the structure of pre-existing footage, applying the visual styles of unrelated images to existing videos.

The videos don’t look all that realistic, and the AI can’t create long videos from scratch yet, but the clips could make for creative art shorts. Along with the company’s other tools, the AI model could become a valuable part of a larger workflow. The company hasn’t revealed when its text-to-video and other Gen-2 tools will be publicly available. However, the announcement video (masthead) notes it could be soon.

Runway also offers over two dozen other tools that apply AI more granularly. Along with generating images, the suite can use text prompts to alter existing pictures, make textures for 3D objects, and colorize black-and-white photos. The company's video editing tools can add or subtract scene elements, interpolate frames, implement slow motion, censor faces, generate transcripts and subtitles, extract depth information, track motion, and edit audio. Background-related functions let users remove, replace, or blur a video's background.
