Video generation models as world simulators

We explore large-scale training of generative models on video data. Specifically, we train text-conditional diffusion models jointly on videos and images of variable durations, resolutions, and aspect ratios. We leverage a transformer architecture that operates on spacetime patches of video and image latent codes. Our largest model, Sora, is capable of generating a minute of high-fidelity video. Our results suggest that scaling video generation models is a promising path towards building general-purpose simulators of the physical world.
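The report does not include code, but the core patching idea is easy to illustrate. Below is a minimal sketch of turning a video latent into a sequence of spacetime patch tokens, assuming a PyTorch tensor laid out as (T, C, H, W) and a hypothetical patch size of 2 along each axis; the function name, patch sizes, and layout are illustrative assumptions, not Sora's released implementation.

```python
import torch

def spacetime_patches(latent: torch.Tensor, pt: int = 2, ph: int = 2, pw: int = 2) -> torch.Tensor:
    """Flatten a video latent of shape (T, C, H, W) into a sequence of
    spacetime patch tokens, each covering pt frames x (ph x pw) latent pixels.
    Illustrative sketch only; Sora's actual patching details are not public."""
    T, C, H, W = latent.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "patch size must divide latent dims"
    # Split each axis into (blocks, within-block): (T//pt, pt, C, H//ph, ph, W//pw, pw)
    x = latent.reshape(T // pt, pt, C, H // ph, ph, W // pw, pw)
    # Bring the patch-grid axes (t', h', w') to the front and the
    # per-patch contents (pt, ph, pw, C) to the back, then flatten.
    x = x.permute(0, 3, 5, 1, 4, 6, 2)
    return x.reshape(-1, pt * ph * pw * C)

# A 16-frame, 4-channel, 32x32 latent yields 8*16*16 = 2048 tokens of width 32.
tokens = spacetime_patches(torch.randn(16, 4, 32, 32))
print(tokens.shape)  # torch.Size([2048, 32])
```

Each token would then be embedded and fed to the diffusion transformer; a scheme like this is presumably also what lets variable durations, resolutions, and aspect ratios be handled uniformly, since they only change the number of tokens, not the model architecture.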