[Paper] Robust Differential Evolution via Nonlinear Population Size Reduction and Adaptive Restart: The ARRDE Algorithm
Source: arXiv - 2511.18429v1
Overview
The paper introduces Adaptive Restart‑Refine Differential Evolution (ARRDE), a new variant of the popular Differential Evolution (DE) optimizer. By combining a nonlinear population‑size reduction schedule with an adaptive restart‑refine mechanism, ARRDE aims to remain competitive across very different benchmark suites, something many state‑of‑the‑art DE algorithms struggle with. The authors support this claim with an extensive evaluation on five CEC benchmark collections spanning more than a decade of challenge sets.
Key Contributions
- ARRDE algorithm: Extends LSHADE with jSO‑style control parameters, adds a nonlinear population‑size reduction, and an adaptive restart‑refine strategy that re‑initialises the search when stagnation is detected.
- Cross‑suite robustness study: First work to benchmark a DE variant on five CEC suites (2011, 2017, 2019, 2020, 2022) covering a wide range of dimensionalities, landscape complexities, and evaluation budgets.
- Bounded‑accuracy scoring metric: Proposes a relative‑error‑based metric that normalises performance across suites with different objective‑function scales, enabling fair head‑to‑head comparisons.
- Comprehensive empirical evidence: Shows ARRDE consistently ranks first (both rank‑based and accuracy‑based) against strong baselines such as jSO, LSHADE‑cnEpSin, j2020, and NLSHADE‑RSP.
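The summary does not reproduce the paper's exact formula for the bounded‑accuracy score, but its stated purpose, normalising relative error so that scores are comparable across suites with different objective scales, can be sketched as follows. The function name and the choice of a worst‑baseline reference value `f_ref` are assumptions for illustration, not the authors' definition:

```python
def bounded_accuracy(f_found, f_opt, f_ref):
    """Hypothetical bounded-accuracy score in [0, 1].

    f_found: best objective value reached by the solver
    f_opt:   known (or best-known) optimum for the problem
    f_ref:   reference value (e.g. worst baseline result) used
             to normalise away the objective-function scale
    """
    if f_ref == f_opt:  # degenerate case: every solver is optimal
        return 1.0
    rel_err = (f_found - f_opt) / (f_ref - f_opt)
    # Clamp so that scores from very different suites share one scale.
    return min(max(1.0 - rel_err, 0.0), 1.0)
```

Because the score is clamped to [0, 1] regardless of the raw objective magnitude, averaging it across problems from different suites is meaningful, which is exactly the cross‑suite comparison the metric is meant to enable.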
Methodology
- Base algorithm – ARRDE starts from LSHADE, a DE variant that already adapts its control parameters (mutation factor F and crossover rate CR) and shrinks its population linearly.
- jSO mechanisms – It inherits jSO’s successful parameter‑control heuristics (e.g., success‑history based adaptation) to keep the search diverse.
- Nonlinear population‑size reduction – Instead of a straight‑line decay, the population size follows a concave curve: it stays large early on (promoting exploration) and drops sharply later (intensifying exploitation). The schedule is defined by a simple power‑law function, requiring only one extra hyper‑parameter.
- Adaptive restart‑refine – During the run, ARRDE monitors improvement. If the best fitness hasn’t improved beyond a tiny tolerance for a preset number of generations, the algorithm restarts with a refreshed population drawn from the current elite set, then refines by temporarily increasing the population size again. This dynamic “reset‑and‑focus” loop helps escape local optima without discarding useful information.
- Evaluation protocol – For each benchmark suite the authors run 51 independent trials per problem, respecting the suite‑specific maximum function‑evaluation budget (Nmax). Performance is reported using both classic ranking (average rank across problems) and the newly introduced bounded‑accuracy score.
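The two scheduling ideas above can be sketched in a few lines. The exact power‑law form and the stagnation test below are plausible reconstructions from the description, not the paper's published equations; the exponent `p` and the window/tolerance values are illustrative assumptions:

```python
def population_size(t, t_max, n_init, n_min, p=3.0):
    """Concave power-law reduction (assumed form): the population stays
    near n_init for most of the run, then drops sharply towards n_min.
    With p > 1, (t / t_max) ** p stays small early and rises fast late."""
    frac = t / t_max
    return max(n_min, round(n_init - (n_init - n_min) * frac ** p))

def should_restart(best_history, window=50, tol=1e-8):
    """Stagnation detector (assumed form): trigger a restart when the
    best fitness has not improved by more than tol over the last
    `window` recorded generations."""
    if len(best_history) <= window:
        return False
    return best_history[-window - 1] - best_history[-1] <= tol
```

For example, with `n_init=100`, `n_min=4`, and `p=3.0`, the population is still 88 individuals at the halfway point of the run, whereas a linear schedule would already be down to 52, which is what keeps exploration alive early while still concentrating the budget at the end.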
Results & Findings
- Across‑suite dominance: ARRDE achieved the best average rank on all five benchmark suites, beating algorithms that were previously tuned for a single suite.
- Statistical significance: Pairwise Wilcoxon signed‑rank tests confirm that ARRDE’s improvements over the closest competitors are statistically significant (p < 0.01) on most problem groups.
- Scalability: The algorithm maintains its edge when the dimensionality jumps from 10 to 30 variables, indicating the nonlinear population schedule scales well.
- Robustness to budget: Even when Nmax is reduced to half of the official limit, ARRDE’s performance degrades gracefully, unlike many baselines whose results drop off sharply under the tighter budget.
Practical Implications
- Plug‑and‑play optimizer: Developers can adopt ARRDE as a drop‑in replacement for existing DE libraries (e.g., DEAP, PyGMO) without needing to retune hyper‑parameters for each new problem class.
- Industrial design & simulation: Tasks such as aerodynamic shape optimisation, hyper‑parameter tuning for machine‑learning pipelines, or calibration of physical models often involve bound‑constrained, noisy, or multimodal landscapes. ARRDE’s adaptive restart‑refine loop makes it resilient to the “unknown‑terrain” nature of these problems.
- Resource‑aware optimisation: The nonlinear population reduction means fewer individuals are evaluated later in the run, saving computational budget—a boon for cloud‑based or embedded optimisation where each function evaluation may be expensive.
- Benchmark‑agnostic development: The bounded‑accuracy scoring metric can be reused by practitioners to compare solvers on proprietary test suites where objective scales differ, fostering more transparent performance reporting.
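No reference implementation of ARRDE is cited in this summary, but the base DE machinery it builds on, and the kind of drop‑in `minimize`‑style interface such a solver would expose, can be illustrated with a minimal classic DE/rand/1/bin loop in pure Python. All names and parameter defaults here are illustrative; ARRDE layers adaptive F/CR, population reduction, and restarts on top of a loop like this:

```python
import random

def de_minimize(f, bounds, pop_size=20, max_gens=200, F=0.5, CR=0.9, seed=0):
    """Minimal classic DE/rand/1/bin for bound-constrained minimisation.
    Illustrative sketch only, not the ARRDE algorithm itself."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(max_gens):
        for i in range(pop_size):
            # Pick three distinct partners (none equal to i) for mutation.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # force at least one mutated gene
            trial = []
            for j, (lo, hi) in enumerate(bounds):
                if j == j_rand or rng.random() < CR:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    trial.append(min(max(v, lo), hi))  # clip to bounds
                else:
                    trial.append(pop[i][j])
            f_trial = f(trial)
            if f_trial <= fit[i]:  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]
```

A typical call mirrors the interface of existing DE libraries, e.g. `de_minimize(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 3)` to minimise a 3‑D sphere function, which is the plug‑and‑play shape a practitioner would expect an ARRDE implementation to offer.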
Limitations & Future Work
- Parameter sensitivity: Although ARRDE reduces the need for manual tuning, the power‑law exponent governing population reduction and the stagnation‑detection thresholds still require modest calibration for extreme problem domains (e.g., highly noisy functions).
- Extension to constraints: The current study focuses on bound‑constrained problems; handling general equality/inequality constraints would broaden applicability.
- Hybridisation: Combining ARRDE with local search methods (e.g., gradient‑based refiners) could further accelerate convergence on smooth sub‑problems.
- Real‑world case studies: Future work could validate ARRDE on industrial case studies (e.g., circuit design, logistics) to demonstrate end‑to‑end gains beyond synthetic benchmarks.
Bottom line: ARRDE offers a robust, largely self‑configuring DE variant that consistently outperforms specialized competitors across a wide spectrum of benchmark challenges—making it a compelling tool for developers tackling diverse optimisation problems in the wild.
Authors
- Khoirul Faiq Muzakka
- Ahsani Hafizhu Shali
- Haris Suhendar
- Sören Möller
- Martin Finsterbusch
Paper Information
- arXiv ID: 2511.18429v1
- Categories: cs.NE, math.OC
- Published: November 23, 2025