
Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss

A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm’s privacy loss remain open — even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant factor. This result reveals that all previous analyses for this…
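For readers unfamiliar with the algorithm the abstract refers to, the update is, informally, a stochastic gradient step plus Gaussian noise, projected back onto the bounded domain. Below is a minimal NumPy sketch under that reading; the function and parameter names (noisy_sgd, grad, eta, sigma, radius) and the batch size are illustrative assumptions, not the paper's notation or its exact algorithm.

```python
import numpy as np

def noisy_sgd(grad, x0, data, steps, eta, sigma, radius, rng=None):
    """Sketch of noisy projected SGD (SGLD-style) on a bounded domain.

    grad(x, batch) returns a stochastic gradient; eta is the step size,
    sigma the Gaussian noise scale, radius the domain's radius bound.
    All names here are hypothetical, chosen for illustration only.
    """
    if rng is None:
        rng = np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    batch_size = min(32, len(data))  # assumed batch size, not from the paper
    for _ in range(steps):
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        # Gradient step with additive Gaussian noise.
        x = x - eta * grad(x, batch) + sigma * rng.standard_normal(x.shape)
        # Project back onto the ball of the given radius (bounded domain).
        norm = np.linalg.norm(x)
        if norm > radius:
            x = x * (radius / norm)
    return x
```

The projection step is what makes the "bounded domain" assumption from the abstract concrete: after every noisy update, the iterate is pulled back inside a ball of fixed radius.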