Categories: FAANG

SlowFast-LLaVA-1.5: A Family of Token-Efficient Video Large Language Models for Long-Form Video Understanding

We introduce SlowFast-LLaVA-1.5 (abbreviated as SF-LLaVA-1.5), a family of video large language models (LLMs) offering a token-efficient solution for long-form video understanding. We incorporate the two-stream SlowFast mechanism into a streamlined training pipeline, and perform joint video-image training on a carefully curated data mixture of only publicly available datasets. Our primary focus is on highly efficient model scales (1B and 3B), demonstrating that even relatively small Video LLMs can achieve state-of-the-art performance on video understanding, meeting the demand for…
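As a rough illustration of the two-stream SlowFast idea (a Slow pathway samples a few frames at full spatial detail, while a Fast pathway keeps every frame but pools it aggressively), the sketch below combines the two into one compact video token sequence. This is a minimal sketch under assumed shapes; the function name, stride, and pooling size are illustrative and not the paper's actual implementation.

```python
# Minimal sketch of two-stream SlowFast token aggregation, assuming
# pre-extracted per-frame visual features of shape (T, H, W, C).
# Names and hyperparameters here are hypothetical, for illustration only.
import torch
import torch.nn.functional as F

def slowfast_tokens(frame_feats: torch.Tensor,
                    slow_stride: int = 8,
                    fast_pool: int = 4) -> torch.Tensor:
    """Combine a Slow pathway (few frames, full spatial detail) with a
    Fast pathway (all frames, heavily pooled) into one token sequence."""
    T, H, W, C = frame_feats.shape

    # Slow pathway: keep every `slow_stride`-th frame at full resolution.
    slow = frame_feats[::slow_stride]                      # (T//stride, H, W, C)
    slow_tokens = slow.reshape(-1, C)                      # flatten to tokens

    # Fast pathway: keep all frames but pool spatially to cut token count.
    fast = frame_feats.permute(0, 3, 1, 2)                 # (T, C, H, W)
    fast = F.adaptive_avg_pool2d(fast, fast_pool)          # (T, C, p, p)
    fast_tokens = fast.permute(0, 2, 3, 1).reshape(-1, C)  # flatten to tokens

    # Concatenate sparse-but-detailed and dense-but-coarse video tokens.
    return torch.cat([slow_tokens, fast_tokens], dim=0)

# Example: 64 frames of 24x24 patch features with 1024-dim embeddings
# yields 4608 slow + 1024 fast tokens instead of 36,864 raw tokens.
feats = torch.randn(64, 24, 24, 1024)
print(slowfast_tokens(feats).shape)
```

The token savings come from never feeding the LLM the full frames-times-patches grid: spatial detail is preserved only on the sparsely sampled Slow frames, while temporal coverage is preserved only at coarse spatial resolution on the Fast frames.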