Enabling Differentially Private Federated Learning for Speech Recognition: Benchmarks, Adaptive Optimizers, and Gradient Clipping

While federated learning (FL) and differential privacy (DP) have been extensively studied, their application to automatic speech recognition (ASR) remains largely unexplored due to the challenges in training large transformer models. Specifically, large models further exacerbate issues in FL as they are particularly susceptible to gradient heterogeneity across layers, unlike the relatively uniform gradient behavior …


100X Faster: How We Supercharged Netflix Maestro’s Workflow Engine

By Jun He, Yingyi Zhang, Ely Spears

TL;DR: We recently upgraded the Maestro engine to go beyond scalability and improved its performance by 100X! The overall overhead is reduced from seconds to milliseconds. We have updated the Maestro open source project with this improvement! Please visit the Maestro GitHub repository to get started. If you find …

Announcing Claude Sonnet 4.5 on Vertex AI

Today, we’re announcing the general availability of Claude Sonnet 4.5, Anthropic’s most intelligent model and its best-performing model for complex agents, coding, and computer use, on Vertex AI. Claude Sonnet 4.5 is built to work independently for hours, maintaining clarity while orchestrating tools and coordinating multiple agents to solve complex problems. It’s designed to excel …

Quantum chips just proved they’re ready for the real world

Diraq has shown that its silicon-based quantum chips can maintain world-class accuracy even when mass-produced in semiconductor foundries. Achieving over 99% fidelity in two-qubit operations, the breakthrough clears a major hurdle toward utility-scale quantum computing. Silicon’s compatibility with existing chipmaking processes means building powerful quantum processors could become both cost-effective and scalable.

This was a satisfying peel

My GPU journey since I started playing with AI stuff on my old gaming PC: RX5700XT -> 4070 -> 4090 -> 5090 -> this. It’s gone from 8 minutes to generate a 512×512 image to <8 minutes to generate a short 1080p video.