I mainly used I2V, with several different models for the starting images.
Some thoughts after working on this: the acting I got from LTX blew my mind. No need for super long prompts; I just describe the overall action and put dialogue inside quotation marks.
I mainly used the fast model. With a lot of motion you sometimes get smudges, but overall it worked pretty well. Some of the shots in the final video were one-shot results. I think the most difficult one was the final shot, because the guy kept entering the frame.
In general, models are not good at post-processing effects like film grain, so I added some glitches and grain in post, but no color correction. The model is also not great with text, so try to avoid showing any.
You can generate 20-second continuous videos, which is a game changer for filmmaking (currently 20 seconds is available only on the fast version). Without that, I probably couldn't have gotten the results I wanted for this.
Audio is pretty good, though sometimes during long silent parts it glitches.
Overall, I had tons of fun working on this. I think this is one of the first times I could work on something bigger than a trailer and maintain impressive realism. I can see someone who isn't 'trained' at spotting AI thinking this is a real live-action short. Fun times ahead.
submitted by /u/theNivda