iDLG: Improved Deep Leakage from Gradients

Published: January 1, 2026 at 06:40 AM EST
1 min read
Source: Dev.to

Overview

When people train models collaboratively, they exchange small signals called gradients, long assumed to be safe to share. New work shows those signals can actually be used to reconstruct the original training images and their labels, so data that was meant to stay private may leak.

Method

The researchers found a direct way to extract the ground-truth labels from shared gradients, and then used those labels to reconstruct the original training data more reliably than prior attacks. The trick is simple and applies to many common training setups; it requires no auxiliary data and no label guessing. The new method, called iDLG, makes the leakage order explicit: the labels leak first, then the images are recovered.
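The label-extraction step can be illustrated with a toy sketch. For a softmax classifier trained with cross-entropy, the gradient of the final layer's weights is an outer product of (predicted probabilities minus the one-hot label) with the input features; the true class's row is the only one scaled by a negative factor, so its sign gives the label away. The code below is a minimal numpy illustration of that observation, not the paper's implementation; the network, shapes, and values are assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy final layer: non-negative features h (as after a ReLU),
# weight matrix W, softmax output, cross-entropy loss.
num_classes, feat_dim = 10, 32
h = rng.random(feat_dim)                   # features, all >= 0
W = rng.normal(size=(num_classes, feat_dim))
true_label = 7                             # assumed ground truth for the demo

logits = W @ h
p = np.exp(logits - logits.max())
p /= p.sum()                               # softmax probabilities

y = np.zeros(num_classes)
y[true_label] = 1.0
grad_W = np.outer(p - y, h)                # dL/dW for cross-entropy

# iDLG-style observation: the true class's row of grad_W is scaled by
# (p_c - 1) < 0, while every other row is scaled by p_k > 0. With
# non-negative features, the row with negative entries reveals the label.
recovered = int(np.argmin(grad_W.sum(axis=1)))
print(recovered)  # matches true_label
```

Once the label is pinned down this way, the image reconstruction becomes an easier optimization: only the input needs to be fitted to match the observed gradients.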

Implications

Sharing raw gradients can compromise privacy unless practices change. The idea is easy to explain, but it matters greatly for applications that train collaboratively, such as federated learning on phones or across organizations. If you use shared learning, you should know this risk, and developers need to add protections, such as gradient clipping, noise addition, or secure aggregation, so private data stays private. Defenses exist, but they are not automatic, and they need to be deployed deliberately.
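One common mitigation is to sanitize gradients before sharing them, in the style of DP-SGD: clip each gradient's norm, then add Gaussian noise. The sketch below is a hedged, simplified illustration of that pattern; the function name, clip bound, and noise scale are assumptions for the example, and real deployments need a proper privacy accountant to choose these parameters.

```python
import numpy as np


def sanitize_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the gradient to a max L2 norm, then add Gaussian noise
    (DP-SGD style) before it leaves the device. Simplified sketch:
    a real system would track the privacy budget across rounds."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(scale=noise_std, size=grad.shape)


rng = np.random.default_rng(1)
g = rng.normal(size=100)                      # stand-in for a raw gradient
g_safe = sanitize_gradient(g, rng=rng)
print(np.linalg.norm(g), np.linalg.norm(g_safe))
```

Clipping bounds any single example's influence, and the noise masks the residual signal that attacks like iDLG exploit; the cost is a trade-off against model accuracy.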

Further Reading

iDLG: Improved Deep Leakage from Gradients
