The biggest change: we integrated model layer streaming across all local inference pipelines, cutting peak VRAM usage enough to run on GPUs with 16 GB of VRAM. This has been one of the most requested changes since launch, and it’s live now.
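The post doesn’t spell out the mechanism, but layer streaming in general means keeping model weights in host RAM (or on disk) and copying each layer to the GPU only for the moment it runs, so peak VRAM tracks the largest layer rather than the whole model. A minimal stdlib sketch of that idea, with all names hypothetical and purely illustrative (this is not the app’s actual implementation):

```python
# Illustrative sketch of layer streaming: weights stay off-GPU and are
# loaded one layer at a time, so peak "GPU" memory is one layer's worth
# instead of the full model. Class and method names are made up.

class StreamingRunner:
    def __init__(self, layer_sizes_gb):
        self.layer_sizes_gb = layer_sizes_gb  # layers live in host RAM / on disk
        self.gpu_resident_gb = 0.0
        self.peak_gpu_gb = 0.0

    def _load(self, size_gb):
        # Stand-in for copying one layer's weights host -> GPU.
        self.gpu_resident_gb += size_gb
        self.peak_gpu_gb = max(self.peak_gpu_gb, self.gpu_resident_gb)

    def _unload(self, size_gb):
        # Stand-in for freeing the layer before loading the next one.
        self.gpu_resident_gb -= size_gb

    def run(self, x):
        for size_gb in self.layer_sizes_gb:
            self._load(size_gb)
            x = x + 1  # stand-in for the layer's forward pass
            self._unload(size_gb)
        return x

runner = StreamingRunner([2.5] * 12)  # 12 layers, 30 GB of weights total
out = runner.run(0)
print(runner.peak_gpu_gb)  # 2.5 — one resident layer, vs 30 GB fully loaded
```

The trade-off is extra host-to-GPU transfer time per layer, which is why streaming is typically a fallback for memory-constrained cards rather than the default on high-VRAM hardware.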
What else is in 1.0.3:
- Video Editor performance: Smooth playback and responsiveness even in heavy projects (64+ assets). Fixes for audio playback stability and clip transition rendering.
- Video Editor architecture: Refactored core systems with reliable undo/redo and project persistence.
- Faster model downloads.
- Contributor tooling: Integrated coding agent skills (Cursor, Claude Code, Codex) aligned with the new architecture. If you’ve been thinking about contributing, the barrier just got lower.
The VRAM reduction is the one we’re most excited about. The higher VRAM requirement locked out a lot of capable desktop hardware. If your GPU kept you on the sidelines, try it now and let us know how it works for you on GitHub.
Already using Desktop? The update downloads automatically.
New here? Download
submitted by /u/ltx_model