[Paper] Unrolled Networks are Conditional Probability Flows in MRI Reconstruction

Published: December 2, 2025 at 01:48 PM EST
4 min read
Source: arXiv - 2512.03020v1

Overview

This paper shows that the popular “unrolled” deep‑learning networks used for accelerated MRI reconstruction are mathematically equivalent to discrete steps of a conditional probability flow ODE—the same kind of continuous dynamics that underlie diffusion models. By making this connection explicit, the authors devise a new training scheme (FLAT) that forces the unrolled network to follow the stable ODE trajectory, yielding faster, more reliable reconstructions.

Key Contributions

  • Theoretical bridge: Prove that unrolled MRI reconstruction networks are exact discretizations of conditional probability flow ordinary differential equations (ODEs).
  • Closed‑form parameter mapping: Derive explicit formulas that map ODE coefficients to the learnable weights of each unrolled layer.
  • Flow‑Aligned Training (FLAT): Introduce a training objective that aligns intermediate network outputs with the ideal ODE solution, improving stability without extra inference cost.
  • Empirical validation: Demonstrate on three public MRI datasets that FLAT matches or exceeds diffusion‑model quality while using up to 3× fewer iterations and showing far less divergence than vanilla unrolled nets.

Methodology

  1. Problem setup – MRI acquisition samples the Fourier domain (k‑space). Undersampling speeds up scans but creates aliasing artifacts in the inverse Fourier image.
  2. Unrolled networks – Traditional approaches unroll an iterative optimization algorithm (e.g., gradient descent) into a fixed‑depth neural net, learning a set of parameters for each iteration.
  3. Probability flow ODE view – The authors start from the conditional diffusion formulation, where the data distribution evolves under a stochastic differential equation (SDE). By removing the stochastic term, they obtain a deterministic probability flow ODE that preserves the same marginal distributions.
  4. Discrete‑continuous equivalence – They show that each unrolled layer corresponds to a single Euler step of this ODE, with the layer’s weights directly representing the ODE’s drift term. This yields a closed‑form relationship between the ODE’s continuous coefficients and the network’s learnable parameters (the equations after this list sketch the correspondence).
  5. FLAT training – Instead of learning the parameters freely, FLAT constrains them to satisfy the ODE discretization and adds a loss that penalizes deviation of the intermediate reconstructions from a reference ODE trajectory computed with a high‑precision numerical solver (see the code sketch after this list).
  6. Implementation details – The model uses standard convolutional blocks for the learned regularizer, a data‑consistency projection for each step, and is trained end‑to‑end with a combination of L2 image loss and the FLAT alignment loss.
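
To make steps 3–4 concrete, the sketch below writes out the standard formulation in our own notation (the paper’s exact symbols may differ): the undersampled forward model, the probability flow ODE obtained by dropping the SDE’s noise term, and the Euler step that one unrolled layer implements.

```latex
% Undersampled MRI forward model: sampling mask M, Fourier transform F, noise \varepsilon
y = M F x + \varepsilon

% Conditional diffusion SDE and the deterministic probability flow ODE
% that preserves the same marginals p_t(x \mid y):
\mathrm{d}x = f(x, t)\,\mathrm{d}t + g(t)\,\mathrm{d}w
\;\Longrightarrow\;
\frac{\mathrm{d}x}{\mathrm{d}t} = f(x, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_x \log p_t(x \mid y)

% One Euler step of the ODE; an unrolled layer has the same form, with the
% layer's learned step size playing the role of \Delta t and its update
% direction (data-consistency gradient + regularizer) playing the drift v_\theta:
x_{k+1} = x_k + \Delta t \, v_\theta(x_k, t_k, y)
```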
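
Below is a minimal PyTorch‑style sketch of one unrolled step and a FLAT‑style alignment loss, assuming a single‑coil setup; `UnrolledStep`, `flat_alignment_loss`, and all shapes are our own illustrative names, not the paper’s code.

```python
import torch
import torch.nn as nn

class UnrolledStep(nn.Module):
    """One unrolled iteration: x_{k+1} = x_k - eta * A^H(A x_k - y) + R_theta(x_k),
    where A = mask * FFT is the undersampled Fourier operator and R_theta is a
    small learned regularizer (shapes and names are illustrative)."""

    def __init__(self, channels: int = 2):
        super().__init__()
        self.eta = nn.Parameter(torch.tensor(0.5))   # learned step size (Euler's dt)
        self.regularizer = nn.Sequential(            # learned prior / drift correction
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x, y, mask):
        # x: (B, 2, H, W) real/imag image; y: (B, H, W) complex k-space; mask: (B, H, W)
        x_complex = torch.complex(x[:, 0], x[:, 1])
        residual = mask * (torch.fft.fft2(x_complex, norm="ortho") - y)
        grad = torch.fft.ifft2(residual, norm="ortho")          # A^H(A x - y)
        grad = torch.stack([grad.real, grad.imag], dim=1)
        return x - self.eta * grad + self.regularizer(x)

def flat_alignment_loss(intermediates, ode_reference):
    """Penalize deviation of each intermediate reconstruction from a reference
    probability-flow ODE trajectory precomputed with a high-precision solver."""
    return sum(torch.mean((x - r) ** 2)
               for x, r in zip(intermediates, ode_reference)) / len(intermediates)
```

A full model would stack several such steps and train end‑to‑end on the final L2 image loss plus this alignment term, as described in step 6.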

Results & Findings

| Dataset | PSNR (dB) ↑ | FLAT vs. Baseline Unrolled | FLAT vs. Diffusion Model |
| --- | --- | --- | --- |
| FastMRI Knee | 38.7 | +2.1 dB (more stable across runs) | Comparable (±0.2 dB) |
| Brain (Calgary) | 41.2 | +1.8 dB | +0.5 dB (with 3× fewer steps) |
| Cardiac (MIDAS) | 36.5 | +2.4 dB | Similar quality, 3× speedup |

  • Stability: Across 10 random seeds, FLAT’s variance in PSNR dropped from ~1.2 dB (plain unrolled) to <0.3 dB.
  • Speed: The diffusion baseline needed ~30 inference steps to reach its reported quality; FLAT reached the same PSNR in ~10 steps.
  • Visual quality: Edge preservation and artifact suppression were noticeably better than the baseline, especially in high‑frequency regions (e.g., cartilage edges).
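
For readers unfamiliar with the metric, the PSNR values above are peak signal‑to‑noise ratios in dB; a minimal NumPy reference implementation (our own, not from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB, using the reference's peak intensity."""
    mse = np.mean((reference - reconstruction) ** 2)
    return float(10.0 * np.log10(reference.max() ** 2 / mse))
```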

Practical Implications

  • Faster clinical pipelines: Radiology departments can integrate FLAT‑trained models into the scanner’s reconstruction chain, cutting post‑processing time by up to 70 % compared with diffusion‑based generators.
  • Reduced hardware demand: Because FLAT needs far fewer iterations, it fits comfortably on existing GPU‑accelerated reconstruction servers without requiring the massive memory budgets of diffusion samplers.
  • More predictable deployment: The ODE‑based grounding eliminates the “runaway” behavior sometimes seen in vanilla unrolled nets, making it easier to certify models for regulatory approval.
  • Transferability: The theoretical mapping is agnostic to the specific regularizer architecture, so developers can plug in their favorite CNN or transformer block while still benefiting from FLAT’s stability guarantees.

Limitations & Future Work

  • ODE discretization error: The current implementation uses a simple Euler step; higher‑order schemes could further improve fidelity but would complicate the parameter mapping (a second‑order Heun step is sketched after this list).
  • Training overhead: Computing the reference ODE trajectory for the alignment loss adds a modest cost during training (≈15 % longer epochs).
  • Generality beyond MRI: While the authors argue the theory holds for any linear inverse problem, empirical validation on CT, PET, or non‑medical imaging tasks is still pending.
  • Adaptive step sizing: Future work could explore learned step sizes or adaptive solvers to balance speed and accuracy dynamically.
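
For reference, the Heun scheme alluded to in the first limitation would replace the single Euler update with a predictor-corrector pair; a sketch in the notation of the Methodology equations (our illustration, not the paper’s):

```latex
% Predictor: plain Euler step
\tilde{x}_{k+1} = x_k + \Delta t \, v(x_k, t_k)
% Corrector: average the drift at both endpoints of the step
x_{k+1} = x_k + \tfrac{\Delta t}{2} \left[ v(x_k, t_k) + v(\tilde{x}_{k+1}, t_{k+1}) \right]
```

Each step then evaluates the drift twice, which is why the closed‑form weight mapping would need to be re‑derived.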

Bottom line: By revealing that unrolled MRI reconstruction networks are just discretized probability‑flow ODEs, the authors give us a principled way to make these models faster and more reliable. For developers building next‑generation medical imaging pipelines, FLAT offers a practical, theoretically sound upgrade that bridges the gap between classic optimization‑based reconstructions and the expressive power of modern deep generative models.

Authors

  • Kehan Qi
  • Saumya Gupta
  • Qingqiao Hu
  • Weimin Lyu
  • Chao Chen

Paper Information

  • arXiv ID: 2512.03020v1
  • Categories: cs.CV
  • Published: December 2, 2025
  • PDF: https://arxiv.org/pdf/2512.03020v1