Rosie · 1 min read

First AI Models for Video Editing – GEN-1 (Runway)

As I recently predicted, models for video editing are starting to emerge. One of the first is Gen-1.

On February 6, Runway AI released Gen-1, a new neural network for editing video with text prompts. Think of it as DALL·E, Midjourney, or Stable Diffusion, but for video rather than still images. Gen-1 is a diffusion model conditioned on visual or textual descriptions: it does not generate video from a text prompt alone, but takes an existing video as input and modifies it according to your text prompt or reference image.
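Gen-1's interface is not public, so the sketch below only illustrates the input/output contract described above: a source clip plus a text or image condition goes in, and an edited clip of the same length and frame shape comes out. All names here (`EditRequest`, `edit_video`) are hypothetical and are not Runway's API; the stub returns the frames unchanged where the real model would run its diffusion process.

```python
from dataclasses import dataclass
from typing import List, Optional

Frame = List[List[float]]  # toy stand-in for an H x W image

@dataclass
class EditRequest:
    frames: List[Frame]                  # source video clip
    prompt: Optional[str] = None         # text description of the target look
    style_image: Optional[Frame] = None  # or an image condition

def edit_video(request: EditRequest) -> List[Frame]:
    """Stub of a Gen-1-style video-to-video edit (hypothetical interface).

    A real model would encode the frames, add noise, and denoise them
    conditioned on the prompt or style image while preserving the source
    structure. Here we return the frames unchanged to show the contract:
    same clip length, same frame shape, content re-styled rather than
    generated from scratch.
    """
    if request.prompt is None and request.style_image is None:
        raise ValueError("editing needs a text or image condition")
    return [frame for frame in request.frames]

# Usage: a 2-frame "clip" of 2x2 frames, edited with a text prompt.
clip = [[[0.0, 0.1], [0.2, 0.3]], [[0.4, 0.5], [0.6, 0.7]]]
edited = edit_video(EditRequest(frames=clip, prompt="claymation style"))
```

The key point the stub encodes is that, unlike pure text-to-video generation, the source video is a required input and the prompt only steers how it is re-rendered.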

The paper discusses four main features:

  1. Stylisation – the model modifies the video to match the style of your image.

  2. Storyboarding – instead of placeholder objects, it generates what you need in the video.

  3. Masking – it identifies objects in the video and modifies them according to your wishes.

  4. Rendering – it enhances the basic 3D model with new textures, lighting, and shadows.

Unfortunately, the model is not yet available for testing, but you can sign up for the waiting list or view sample videos.

Source: https://research.runwayml.com/gen1?utm_source=RunwaySocial&utm_medium=YouTube&utm_campaign=Gen1Launch&fbclid=IwAR3EcBFdqi1tPzu4VGFXk5UNzDrPxGu7SleBjXpf4a0EQWOuhEn_24n7wTw

Paper: https://arxiv.org/abs/2302.03011?fbclid=IwAR1-5TosyxC5SRA9-yX-snZWF6efYAia9y_zMzwZFeTvQukqOzrj7IoiSDw

Waiting list: https://docs.google.com/forms/d/e/1FAIpQLSfU0O_i1dym30hEI33teAvCRQ1i8UrGgXd4BPrvBWaOnDgs9g/viewform?fbclid=IwAR24yZJ23XaGEyYwCo5ay_mCcSgAdYZ28hB9VwHEozGKv4IrUVDt6-xX_Ik

#Gen1 #video #Runway #diffusion

Original source: wordpress
