
Evaluating the IWSLT2023 Speech Translation Tasks: Human Annotations, Automatic Metrics, and Segmentation

Human evaluation is a critical component in machine translation system development and has received much attention in text translation research. However, little prior work exists on the topic of human evaluation for speech translation, which introduces additional challenges such as noisy data and segmentation mismatches. We take first steps to fill this gap by conducting a comprehensive human evaluation of the results of several shared tasks from the last International Workshop on Spoken Language Translation (IWSLT 2023). We propose an effective evaluation strategy based on automatic resegmentation…
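As a rough illustration of the resegmentation idea (a sketch, not the paper's actual pipeline): speech translation systems produce output under their own automatic segmentation, so hypotheses are first realigned to the reference segment boundaries, typically with a tool such as mwerSegmenter as in past IWSLT evaluations, and only then scored with automatic metrics. The Python sketch below assumes that realignment has already been performed and uses sacrebleu to compute corpus-level BLEU and chrF; the example segments are invented.

import sacrebleu

# Hypothetical, already-resegmented system outputs: after realignment there is
# exactly one hypothesis line per reference segment.
hypotheses = [
    "the committee approved the proposal yesterday",
    "it will come into effect next month",
]
references = [
    "the committee approved the proposal yesterday",
    "it comes into effect next month",
]

# Corpus-level automatic metrics on the resegmented output.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references])
print(f"BLEU {bleu.score:.2f}  chrF {chrf.score:.2f}")

Because the scores are computed only after hypotheses and references share the same segmentation, metric differences reflect translation quality rather than segmentation mismatches.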