Multimodal Model For Generative Creation | LTX Model


Build Anything

Multimodal video generation infrastructure for real workflows.

We believe creative systems should be reliable enough for real production, flexible enough to adapt to different workflows, and open enough to improve through use. This is why we’re building an open, production-grade system for cinematic, synchronized audio–video creation.

Open Source by design

Hear from Zeev, co-founder & CEO of Lightricks, on why we’re open-sourcing LTX-2 and what it means for the future of AI.
Learn More →

Model weights, code, and core tooling are openly available for inspection, extension, and reuse.

Excellence through collaboration

Improvement driven by real-world usage, iterative experimentation, and community-driven collaboration.

Build, Create, and Scale with LTX

Production-grade video generation models designed to hold up under real workloads. Built for long sequences, precise motion, and high-fidelity output, from fast iteration to final-quality renders. Learn more →

Access the full power of LTX-2 through an API built for production

Success, engineered together

"For professional studios, this level of control is not optional. Training and steering video models like LTX is the most viable way to align AI with real production needs, where predictability, ownership, and creative intent matter as much as visual quality."

Mohamed Oumoumad

CTO, Gear Productions