Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm’s privacy loss remain open — even in the seemingly simple setting of smooth convex losses over …
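To make the algorithm under study concrete, below is a minimal sketch of noisy SGD (the SGLD-style update the excerpt refers to): each step follows the gradient plus independent Gaussian noise. The function names and hyperparameters (`noisy_sgd`, `grad_fn`, `eta`, `sigma`) are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of noisy SGD / SGLD-style updates, assuming a smooth
# convex loss. All names and hyperparameters here are illustrative.
import numpy as np

def noisy_sgd(grad_fn, w0, eta=0.1, sigma=1.0, iters=100, rng=None):
    """Run `iters` steps of w <- w - eta * (grad_fn(w) + N(0, sigma^2 I))."""
    rng = np.random.default_rng() if rng is None else rng
    w = np.array(w0, dtype=float)
    for _ in range(iters):
        noise = sigma * rng.standard_normal(w.shape)  # isotropic Gaussian noise
        w = w - eta * (grad_fn(w) + noise)
    return w

# Example: minimize the smooth convex loss f(w) = 0.5 * ||w - c||^2,
# whose gradient is w - c.
c = np.array([1.0, -2.0])
w_hat = noisy_sgd(lambda w: w - c, w0=np.zeros(2), eta=0.05, sigma=0.5, iters=500)
print(w_hat)  # hovers near c, up to fluctuations of scale ~ sigma
```

The injected noise is what yields a differential privacy guarantee, and the open question the paper addresses is how that privacy loss grows with the number of iterations.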