[Paper] EvoIR: Towards All-in-One Image Restoration via Evolutionary Frequency Modulation
Source: arXiv - 2512.05104v1
Overview
EvoIR introduces a fresh take on All‑in‑One Image Restoration (AiOIR) by explicitly handling image frequencies and letting a population‑based optimizer evolve the restoration strategy on the fly. By separating high‑ and low‑frequency information and continuously adapting the loss balance, the framework achieves stronger, more consistent results across a wide range of degradations—from blur and noise to compression artifacts.
Key Contributions
- Frequency‑Modulated Module (FMM): Splits feature maps into high‑ and low‑frequency branches and adaptively re‑weights them, preserving structure while sharpening details.
- Evolutionary Optimization Strategy (EOS): A lightweight, population‑based algorithm that iteratively tunes frequency‑aware loss weights, reducing gradient conflicts between competing objectives (e.g., PSNR vs. perceptual quality).
- Synergistic Design: Demonstrates that FMM and EOS together outperform either component alone, proving that explicit frequency modeling and adaptive optimization are complementary.
- State‑of‑the‑Art Performance: Sets new benchmarks on several public AiOIR datasets, beating prior universal restoration methods in both quantitative metrics and visual fidelity.
Methodology
- Feature Decomposition – The backbone network extracts a latent representation of the degraded image. FMM then routes this representation through two parallel paths:
  - The low‑frequency branch captures global structure and smooth regions.
  - The high‑frequency branch focuses on edges, textures, and fine details.

  Learned gating mechanisms dynamically balance the two streams per image, allowing the model to emphasize the most relevant frequencies for a given degradation.
- Evolutionary Loss Scheduling – Instead of fixing a static combination of reconstruction (e.g., L1), perceptual (e.g., VGG), and adversarial losses, EOS maintains a small "population" of candidate weight vectors. Each generation:
  - evaluates the candidates on a validation batch using a multi‑objective fitness (structural accuracy vs. perceptual realism);
  - applies selection, crossover, and mutation to produce a new set of weights;
  - feeds the best‑performing weight set back into the training loop, steering the network toward a better trade‑off for the current mix of degradations.
- Training Loop – The backbone and FMM are trained end‑to‑end while EOS updates the loss weights every few iterations. Because EOS operates on a small population (e.g., 5–10 candidates), its overhead is negligible compared with standard training.
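The frequency decomposition and gated merge at the heart of FMM can be sketched in a few lines. This is an illustrative stand‑in, not the paper's implementation: it splits a feature map with an FFT low‑pass mask and recombines the branches with scalar gates, whereas EvoIR predicts the gates per image with a learned network.

```python
import numpy as np

def frequency_split(feat, cutoff=0.25):
    """Split a feature map into low- and high-frequency components
    using an FFT low-pass mask (a stand-in for FMM's decomposition).
    `cutoff` is a normalized frequency radius in [0, 0.5]."""
    h, w = feat.shape[-2:]
    fy = np.fft.fftfreq(h)[:, None]          # vertical frequencies
    fx = np.fft.fftfreq(w)[None, :]          # horizontal frequencies
    mask = (np.sqrt(fy**2 + fx**2) <= cutoff).astype(feat.dtype)
    spec = np.fft.fft2(feat, axes=(-2, -1))
    low = np.fft.ifft2(spec * mask, axes=(-2, -1)).real
    high = feat - low                         # residual carries edges/textures
    return low, high

def gated_merge(low, high, g_low, g_high):
    """Re-weight and recombine the two branches. In EvoIR the gates
    would be produced per image by a small learned gating network."""
    return g_low * low + g_high * high
```

By construction the two branches sum back to the input when both gates are 1, so boosting `g_high` sharpens detail while `g_low` preserves global structure.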
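One generation of the evolutionary loss scheduling described above can be sketched as follows. The function names, uniform crossover, and Gaussian mutation are illustrative assumptions; `fitness_fn` stands in for the paper's multi‑objective score (structural accuracy vs. perceptual realism) computed on a validation batch.

```python
import random

def evolve_loss_weights(population, fitness_fn, n_keep=2, mutation_std=0.05):
    """One generation of population-based loss-weight search:
    rank candidate weight vectors by fitness, keep the best `n_keep`
    as parents, and refill the population with mutated crossovers."""
    ranked = sorted(population, key=fitness_fn, reverse=True)
    parents = ranked[:n_keep]
    children = []
    while len(parents) + len(children) < len(population):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]   # uniform crossover
        child = [max(0.0, w + random.gauss(0.0, mutation_std))  # mutate,
                 for w in child]                                # keep weights >= 0
        children.append(child)
    return parents + children
```

The best candidate of each generation (`ranked[0]`) would then set the loss weights for the next few training iterations, which is why the overhead stays small for populations of 5–10.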
Results & Findings
The table below reports EvoIR's scores, with the strongest prior baseline's score in parentheses.
| Dataset / Task | PSNR ↑ | SSIM ↑ | LPIPS ↓ |
|---|---|---|---|
| DIV2K (mixed degradations) | 30.8 (vs. 29.4) | 0.92 (vs. 0.89) | 0.12 (vs. 0.15) |
| Real‑World Denoising (SIDD) | 38.5 (vs. 37.1) | 0.96 (vs. 0.94) | 0.08 (vs. 0.10) |
| JPEG Artifacts (LIVE1) | 33.2 (vs. 31.7) | 0.94 (vs. 0.91) | 0.09 (vs. 0.13) |
- Consistent gains across heterogeneous degradations, confirming that explicit frequency handling prevents over‑smoothing on low‑frequency tasks while still sharpening textures.
- Faster convergence: EOS reduces the number of epochs needed to reach a target PSNR by ~20 % compared with a fixed‑weight baseline.
- Ablation studies show that removing either FMM or EOS drops performance by ~1–1.5 dB PSNR, underscoring their complementary effect.
Practical Implications
- Unified Restoration Service: Developers can deploy a single model for denoising, deblurring, super‑resolution, and compression artifact removal, simplifying pipelines in photo‑editing apps, streaming services, and surveillance systems.
- Adaptive Quality Control: EOS can be exposed as a runtime API that adjusts loss weights on‑the‑fly based on user‑defined quality preferences (e.g., prioritize sharpness vs. naturalness).
- Edge‑Friendly Deployment: The FMM adds modest overhead (≈10 % extra FLOPs) and can be fused into existing encoder‑decoder backbones, making it suitable for mobile or embedded inference.
- Reduced Need for Task‑Specific Fine‑Tuning: Because the model learns to balance frequencies automatically, teams can avoid maintaining separate fine‑tuned checkpoints for each degradation type, cutting storage and maintenance costs.
Limitations & Future Work
- Population Size Trade‑off: While EOS is lightweight, larger populations could yield better loss schedules at the cost of training time—a balance that still needs systematic study.
- Generalization to Extreme Degradations: The current benchmarks focus on moderate real‑world noise and compression; performance on severely corrupted inputs (e.g., heavy motion blur) remains to be explored.
- Extension to Video: The authors note that applying evolutionary frequency modulation across temporal dimensions could further benefit video restoration, an avenue for future research.
EvoIR demonstrates that marrying explicit frequency decomposition with an evolutionary loss‑balancing scheme can push universal image restoration toward truly “all‑in‑one” performance, opening the door for more flexible, developer‑friendly restoration services.
Authors
- Jiaqi Ma
- Shengkai Hu
- Jun Wan
- Jiaxing Huang
- Lefei Zhang
- Salman Khan
Paper Information
- arXiv ID: 2512.05104v1
- Categories: cs.CV
- Published: December 4, 2025