Hi everyone. I’m Zeev Farbman, Co-founder & CEO of Lightricks. I’ve spent the last few years working closely with our team on LTX-2, a production-ready audio–video foundation model. This week we did a full open-source release of LTX-2, including weights, code, a trainer, benchmarks, LoRAs, and documentation. Open releases of multimodal models are rare, and when they do happen, they’re often hard to run or hard to reproduce. We built LTX-2 to be something you can actually use: it runs locally on consumer GPUs and powers real products at Lightricks. I’m here to answer questions about:
Ask me anything! Verification: submitted by /u/ltx_model
GitHub | CivitAI. Point this workflow at a directory of clips and it will automatically…
Existing feed-forward 3D Gaussian Splatting methods predict pixel-aligned primitives, leading to a quadratic growth in…
We tested Garmin’s GPS-enabled fitness trackers and found the perfect picks for casual hikers, backcountry…
New research confirms it: the creativity of artificial intelligence (AI) is a myth. Although current…
https://arstechnica.com/ai/2026/03/google-says-new-turboquant-compression-can-lower-ai-memory-usage-without-sacrificing-quality/ submitted by /u/pheonis2
Creating an AI agent for tasks like analyzing and processing documents autonomously used to require…