
MM-Ego: Towards Building Egocentric Multimodal LLMs

This research aims to comprehensively explore building a multimodal foundation model for egocentric video understanding. To achieve this goal, we work on three fronts. First, as there is a lack of QA data for egocentric video understanding, we automatically generate 7M high-quality QA samples for egocentric videos in Ego4D, ranging from 30 seconds to one hour long, based on human-annotated data. This is one of the largest egocentric QA datasets. Second, we contribute a challenging egocentric QA benchmark with 629 videos and 7,026 questions to evaluate the models’ ability to recognize and…
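The abstract only states that the QA pairs are produced automatically from Ego4D's human-annotated narrations, so here is a minimal, hypothetical sketch of what one narration-to-QA prompting step could look like. The Narration dataclass, the build_qa_prompt function, and the prompt wording are all illustrative assumptions, not the paper's actual pipeline.

```python
# Hypothetical sketch (not the paper's actual method): format timestamped
# Ego4D-style narrations into a prompt that an instruction-tuned LLM could
# turn into question-answer pairs about the clip.
from dataclasses import dataclass
from typing import List


@dataclass
class Narration:
    timestamp_s: float  # seconds from the start of the clip
    text: str           # human-written narration, e.g. "#C C picks up a knife"


def build_qa_prompt(narrations: List[Narration]) -> str:
    """Assemble a QA-generation prompt from human-annotated narrations."""
    lines = [f"[{n.timestamp_s:7.1f}s] {n.text}" for n in narrations]
    return (
        "The following are timestamped narrations of an egocentric video.\n"
        + "\n".join(lines)
        + "\nWrite question-answer pairs that test memory of specific "
          "visual details and the order in which they occur."
    )


if __name__ == "__main__":
    clip = [
        Narration(3.2, "#C C picks up a knife from the counter"),
        Narration(11.8, "#C C slices a tomato on the cutting board"),
    ]
    # The resulting prompt would be sent to an LLM (call not shown here).
    print(build_qa_prompt(clip))
```

In a real pipeline, a step like this would be run over every annotated clip and the model outputs filtered for quality before being kept as training QA samples; the filtering criteria are not described in the excerpt above.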