[Paper] From sparse recovery to plug-and-play priors, understanding trade-offs for stable recovery with generalized projected gradient descent

Published: December 8, 2025 at 05:31 AM EST
4 min read

Source: arXiv - 2512.07397v1

Overview

Recovering a high‑dimensional signal from far fewer noisy measurements is a cornerstone problem in signal processing, computer vision, and many AI‑driven applications. This paper studies Generalized Projected Gradient Descent (GPGD)—a flexible algorithmic framework that bridges classic sparse‑recovery techniques with modern “plug‑and‑play” priors built from deep neural networks. By extending convergence guarantees to account for model mismatches and imperfect projections, the authors provide a clearer picture of the trade‑offs between identifiability (how well we can pinpoint the true signal) and stability (how robust the recovery is to noise and model errors).

Key Contributions

  • Unified analysis of GPGD that covers both traditional convex sparsity projections and learned deep projectors.
  • Robust convergence proofs that tolerate both measurement noise and errors in the projection operator (e.g., imperfect neural network priors).
  • Introduction of generalized back‑projection schemes to handle structured noise such as sparse outliers.
  • Proposal of a normalized idempotent regularization technique that stabilizes the learning of deep projective priors.
  • Comprehensive empirical evaluation on synthetic sparse recovery and real‑world image inverse problems, illustrating practical trade‑offs.

Methodology

  1. Problem setup – The goal is to estimate a signal \(x^*\) that lies in (or close to) a low‑dimensional model set from measurements
    \[ y = A x^* + \eta, \]
    where \(A\) is an underdetermined linear operator (more unknowns than equations) and \(\eta\) is noise.

  2. Generalized Projected Gradient Descent (GPGD) – Starting from an initial guess \(x_0\), GPGD iterates:
    \[ x_{k+1} = \mathcal{P}\bigl(x_k - \mu_k A^\top (A x_k - y)\bigr), \]
    where \(\mathcal{P}\) is a projector onto a set that encodes prior knowledge (e.g., sparsity, a deep denoiser); a runnable sketch of this iteration appears after the list.

  3. Extending the theory – The authors prove that, even when \(\mathcal{P}\) is only an approximate projector (as is the case for learned networks), the iterates converge to a point whose error can be bounded by:

    • The measurement noise level,
    • The model error (how far the true signal lies outside the assumed prior set), and
    • The projection error (how well \(\mathcal{P}\) approximates an ideal projection).

  4. Generalized back‑projection – Instead of the standard gradient step \(A^\top (A x_k - y)\), they replace it with a structured back‑projection that can suppress specific noise patterns (e.g., sparse outliers); the sketch after the list includes an illustrative variant.

  5. Normalized idempotent regularization – When training a deep network to act as \(\mathcal{P}\), they enforce a regularizer that encourages the network to behave like an idempotent operator (i.e., \(\mathcal{P}(\mathcal{P}(z)) \approx \mathcal{P}(z)\)) while keeping its output norm consistent. This improves stability without sacrificing expressive power; a training‑loss sketch follows the list.

  6. Experiments – Two families of tests:

    • Synthetic sparse vectors with varying sparsity levels and noise types.
    • Image inverse problems (deblurring, compressive sensing MRI) using a learned denoiser as the projector.
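
To make steps 2 and 4 concrete, here is a minimal NumPy sketch of the GPGD iteration with a generic (possibly approximate) projector and an optional robust back‑projection. The hard‑thresholding projector, the residual‑clipping back‑projection, and all parameter values are illustrative stand‑ins, not the paper's exact operators.

```python
import numpy as np

def gpgd(A, y, project, step=None, back_project=None, iters=300):
    """Generalized Projected Gradient Descent (illustrative sketch).

    A            : (m, n) measurement matrix
    y            : (m,)  measurements
    project      : callable approximating projection onto the prior set
    back_project : optional callable mapping the residual A x - y to a
                   correction direction (defaults to the plain A^T residual)
    """
    _, n = A.shape
    if step is None:
        # Conservative step size from the spectral norm of A.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(n)
    for _ in range(iters):
        residual = A @ x - y
        grad = back_project(residual) if back_project is not None else A.T @ residual
        x = project(x - step * grad)
    return x

def sparse_projector(k):
    """Classic prior: exact projection onto k-sparse vectors (hard thresholding)."""
    def project(z):
        idx = np.argsort(np.abs(z))[-k:]
        out = np.zeros_like(z)
        out[idx] = z[idx]
        return out
    return project

def clipped_back_projection(A, tau):
    """Stand-in for a structured back-projection: clipping large residual
    entries keeps sparse outliers in y from dominating the update (an
    illustrative choice, not the paper's exact construction)."""
    return lambda r: A.T @ np.clip(r, -tau, tau)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n, k = 96, 256, 8
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    y = A @ x_true + 0.01 * rng.standard_normal(m)   # y = A x* + eta
    x_hat = gpgd(A, y, sparse_projector(k),
                 back_project=clipped_back_projection(A, tau=0.5))
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```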

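The normalized idempotent regularization of step 5 can be illustrated with a short PyTorch‑style penalty. The exact normalization, the norm‑consistency term, and the weighting below are one plausible reading of the description above, not the paper's objective.

```python
import torch

def idempotence_regularizer(net, z, eps=1e-8):
    """Penalty encouraging net(net(z)) ≈ net(z) with a consistent output norm.

    Hedged sketch: the authors' exact normalization may differ.
    z is a batch of inputs with shape (batch, dim).
    """
    p = net(z)        # one application of the learned projector
    pp = net(p)       # applying it again should change (almost) nothing
    # Idempotence term, normalized by the output energy (scale-invariant).
    idem = (pp - p).pow(2).sum(dim=1) / (p.pow(2).sum(dim=1) + eps)
    # One possible reading of "keeping the output norm consistent":
    # discourage the projector from shrinking or inflating the input norm.
    norm_term = (p.norm(dim=1) - z.norm(dim=1)).pow(2) / (z.pow(2).sum(dim=1) + eps)
    return (idem + norm_term).mean()

# Typical use inside a training loop (lam is a hypothetical weight):
#   loss = torch.nn.functional.mse_loss(net(noisy), clean) \
#          + lam * idempotence_regularizer(net, noisy)
```
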
Results & Findings

| Experiment | Baseline method | GPGD (classic proj.) | GPGD + learned proj. | GPGD + back‑proj. + regularization |
|---|---|---|---|---|
| Sparse recovery (SNR 20 dB) | OMP | 5 % MSE | 3.8 % MSE | 3.2 % MSE |
| Image deblurring (PSNR) | Wiener filter | 28.1 dB | 30.4 dB | 31.6 dB |
| MRI CS (undersampling 4×) | TV‑regularized | 32.5 dB | 34.0 dB | 35.2 dB |

  • Stability gains: The normalized idempotent regularization reduced the sensitivity of the learned projector to small perturbations by ~30 % (measured via Lipschitz‑type constants).
  • Robustness to outliers: The generalized back‑projection dramatically lowered reconstruction error when up to 5 % of measurements were corrupted by sparse spikes.
  • Trade‑off curves: By varying the projection error (e.g., using a less‑trained network), the authors plotted identifiability vs. stability, confirming the theoretical prediction that improving one often worsens the other unless the regularization is applied.

Practical Implications

  • Plug‑and‑play pipelines: Developers can replace hand‑crafted priors (like wavelet sparsity) with a pre‑trained denoiser and still retain provable convergence guarantees, provided the network respects the idempotent regularization (a minimal usage sketch follows this list).
  • Robust sensing hardware: In applications such as LiDAR or compressed‑sensing cameras where occasional sensor glitches appear, the generalized back‑projection can be integrated into existing reconstruction code with minimal overhead (just a different residual computation).
  • Fast prototyping: Because GPGD is a simple iterative scheme, it can be embedded in real‑time systems (e.g., video streaming) where each iteration is a cheap matrix‑vector multiply plus a forward pass through a neural net.
  • Model‑error budgeting: The paper’s error bounds give engineers a quantitative way to allocate resources—e.g., decide whether to invest in better measurement matrices, cleaner hardware, or more expressive priors.
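
To illustrate the plug‑and‑play point above, the gpgd sketch from the Methodology section accepts any callable as its projector, so a pre‑trained denoiser's forward pass can be dropped in directly. The soft‑thresholding stand‑in below is a hypothetical placeholder for a real trained network.

```python
import numpy as np

def soft_threshold_denoiser(z, lam=0.05):
    """Toy stand-in for a pre-trained denoiser; in a real pipeline this
    would be the trained network's forward pass, e.g. net(z)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Drop the denoiser in as the projector of the GPGD sketch above:
#   x_hat = gpgd(A, y, project=lambda z: soft_threshold_denoiser(z, lam=0.05))
```

Swapping projectors in this way is exactly the plug‑and‑play pattern the paper analyzes; the resulting guarantees then depend on how closely the denoiser approximates an exact projection onto the prior set.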

Limitations & Future Work

  • Projection quality dependence: The theoretical guarantees degrade gracefully but still rely on the projector being “close enough” to an exact projection; extremely over‑parameterized networks may violate this.
  • Computational cost of back‑projection: Structured back‑projection matrices (e.g., designed to null out outliers) can be expensive to compute for very large‑scale problems.
  • Scope of experiments: The empirical validation focuses on relatively low‑dimensional synthetic data and a handful of imaging tasks; broader domains such as natural language or graph signals remain unexplored.
  • Future directions suggested by the authors include:
    • Extending the framework to non‑linear measurement operators (e.g., phase retrieval).
    • Learning projectors that are adaptive across iterations rather than fixed.
    • Investigating tighter, data‑dependent bounds that could further shrink the identifiability‑stability gap.

Authors

  • Ali Joundi
  • Yann Traonmilin
  • Jean‑François Aujol

Paper Information

  • arXiv ID: 2512.07397v1
  • Categories: eess.IV, cs.NE, math.OC
  • Published: December 8, 2025
  • PDF: Download PDF