iDLG: Improved Deep Leakage from Gradients
Source: Dev.to
Overview
When people train models together, each participant shares gradients: small update signals that were long assumed safe to exchange because the raw data never leaves the device. New work shows those gradients can be reversed to reveal the original training images and their labels, so private data can leak even though it is never sent directly.
Method
The researchers show that for any network trained with cross-entropy loss on one-hot labels, the ground-truth label can be extracted analytically from the shared gradient: the last layer's gradient component for the true class is the only one with a negative sign, so no guessing is required. With the label fixed, the image is reconstructed by optimizing a dummy input until its gradients match the observed ones, which converges more reliably than the earlier DLG attack that had to recover data and label jointly. This new method, called iDLG, makes the order of the problem visible: the labels leak first, then the images.
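The sign observation behind the label extraction can be sketched in a few lines. This is a toy NumPy illustration, not the paper's code: a single sample passes through a linear layer with softmax cross-entropy, and the attacker, seeing only the weight gradient, identifies the true class as the row whose gradient points opposite to all the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative shapes): one sample entering a final linear
# layer with softmax + cross-entropy loss.
num_classes, feat_dim = 10, 32
x = rng.normal(size=feat_dim)               # features feeding the last layer
W = rng.normal(size=(num_classes, feat_dim))
true_label = 7

# Forward pass: logits -> softmax probabilities.
logits = W @ x
p = np.exp(logits - logits.max())
p /= p.sum()

# Cross-entropy gradient w.r.t. the last-layer weights:
#   dL/dW_i = (p_i - 1[i == true_label]) * x
# Only the true-label row has a negative scalar (p_i - 1 < 0).
grad_W = (p - np.eye(num_classes)[true_label])[:, None] * x[None, :]


def extract_label(grad_W):
    """Return the index whose gradient row has negative inner product
    with every other row -- the iDLG sign criterion."""
    G = grad_W @ grad_W.T
    n = grad_W.shape[0]
    for i in range(n):
        if all(G[i, j] < 0 for j in range(n) if j != i):
            return i
    return None


recovered = extract_label(grad_W)  # equals true_label (7)
```

Because every gradient row is a scalar multiple of the same feature vector, the true-label row (scalar below zero) is the unique row anti-correlated with all others; the attacker needs only the gradient, never the features.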
Implications
Sharing raw gradients can compromise privacy unless practices change. The attack is simple to state, but it matters for federated learning and other settings where phones or organizations train a shared model without pooling their data. If you rely on shared learning, you should know this risk, and developers need to add protections, such as gradient clipping, added noise, or compression, so that gradients stop carrying recoverable private information. The fix is possible, but not automatic, and it requires deliberate action now.
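One common mitigation is to clip and noise each gradient before sharing it, in the style of differentially private SGD. The helper below is a hypothetical sketch (its name, parameters, and defaults are mine, not from the paper); it illustrates the idea that bounding and randomizing the shared signal blunts reconstruction attacks, at some cost in model accuracy.

```python
import numpy as np


def sanitize_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a gradient's L2 norm, then add Gaussian noise before sharing.

    Hypothetical DP-SGD-style helper: clipping bounds any one example's
    influence on the update, and the noise masks the per-example signal
    that attacks like iDLG exploit.
    """
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))
    return grad + rng.normal(scale=noise_std, size=grad.shape)


# Usage: a large raw gradient is scaled down to the clip norm and noised.
raw = np.ones(100) * 5.0                      # L2 norm 50
shared = sanitize_gradient(raw, clip_norm=1.0, noise_std=0.1)
```

The clip norm and noise scale trade privacy against utility; too little noise leaves the label sign intact, while too much slows or stalls training.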