[Paper] All Constant Mutation Rates for the $(1+1)$ Evolutionary Algorithm

Published: February 21, 2026 at 07:30 PM EST
5 min read
Source: arXiv - 2602.18989v1

Overview

The paper proves a striking property of the classic $(1+1)$ Evolutionary Algorithm (EA): for any mutation probability you can pick in the open interval $(0,1)$, there exists a contrived fitness landscape on which that exact probability (up to an arbitrarily small tolerance) is the best possible choice. In other words, the set of optimal mutation rates is dense in the whole range from 0 to 1. The result is demonstrated via a newly designed benchmark function called DistantSteppingStones.

Key Contributions

  • Density theorem: Shows that for every $p\in(0,1)$ and any $\varepsilon>0$ there is a fitness function whose optimal mutation rate lies in $(p-\varepsilon, p+\varepsilon)$.
  • Construction of DistantSteppingStones: A synthetic problem composed of huge plateaus separated by deep valleys, tailored to force the EA to favor a specific mutation rate.
  • Theoretical insight: Provides a rigorous answer to the long‑standing question of whether a “universal” optimal mutation rate exists for the $(1+1)$ EA, demonstrating that no single rate can dominate across all problems.
  • Methodological template: Introduces a technique for engineering fitness landscapes that can be used to test algorithmic parameter sensitivities beyond the $(1+1)$ EA.

Methodology

  1. Problem design – The author defines the DistantSteppingStones function on binary strings of length $n$. The landscape consists of a sequence of steps: each step is a large flat plateau (many genotypes share the same fitness) followed by a steep drop (a valley) before the next plateau. The distance between successive plateaus grows with the step index.
  2. Parameter analysis – For a given mutation rate $p$, the expected time to jump from one plateau to the next is derived analytically. The analysis hinges on the probability of flipping exactly the right set of bits to cross a valley, which is a function of $p$ and the valley width.
  3. Optimality proof – By carefully choosing the sizes of plateaus and valleys as functions of the target mutation rate $p$ and tolerance $\varepsilon$, the author shows that any deviation from the intended rate leads to a strictly larger expected runtime. This yields the density result.
  4. Mathematical tools – Standard tools from the theory of randomized search heuristics are used: drift analysis, Chernoff bounds, and asymptotic runtime calculations.
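The valley-crossing probability in step 2 can be sketched numerically. Assuming standard bit mutation (each of the $n$ bits flips independently with probability $p$), the chance of flipping exactly a designated set of $k$ bits is $p^k(1-p)^{n-k}$, which calculus shows is maximized at $p = k/n$. The function names and parameter values below are illustrative, not taken from the paper:

```python
# Probability that standard bit mutation flips exactly a designated
# set of k bits out of n (and no others): p^k * (1-p)^(n-k).
def jump_probability(p: float, k: int, n: int) -> float:
    return p**k * (1.0 - p) ** (n - k)

# The expected waiting time for that jump is the reciprocal probability.
def expected_waiting_time(p: float, k: int, n: int) -> float:
    return 1.0 / jump_probability(p, k, n)

n, k = 100, 30
# A grid search recovers the calculus result: the maximizer is p = k/n.
grid = [i / 1000 for i in range(1, 1000)]
best = max(grid, key=lambda p: jump_probability(p, k, n))
print(best)  # ≈ k/n = 0.3
```

This is the mechanism the construction exploits: a valley of width $k$ silently rewards mutation rates near $k/n$ and punishes everything else.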

The whole construction is deliberately adversarial—it is not meant to model realistic problems but to expose the limits of a one‑size‑fits‑all mutation setting.

Results & Findings

  • Dense optimal set: For every $p$ and any $\varepsilon$, a DistantSteppingStones instance can be built where the $(1+1)$ EA’s expected optimization time is minimized precisely within $(p-\varepsilon, p+\varepsilon)$.
  • No universal rate: Consequently, there is no mutation probability that is asymptotically optimal across all fitness functions; any fixed rate can be outperformed on some specially crafted landscape.
  • Runtime scaling: The expected runtime on the constructed function scales polynomially with $n$ when the mutation rate matches the designed optimum, but grows super‑polynomially (or even exponentially) when the rate deviates beyond the $\varepsilon$ window.
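The flavor of the scaling result can be illustrated with the same single-jump model (this illustrates the mechanism only; it is not the paper's actual runtime bound). Holding the relative deviation from the jump-optimal rate $p^* = k/n$ fixed, the slowdown factor grows rapidly with the valley width $k$:

```python
def jump_probability(p: float, k: int, n: int) -> float:
    # p^k * (1-p)^(n-k): flip exactly the k designated bits, keep the rest.
    return p**k * (1.0 - p) ** (n - k)

def slowdown(p: float, k: int, n: int) -> float:
    # Ratio of expected waiting times: deviating rate p vs. optimal p* = k/n.
    p_star = k / n
    return jump_probability(p_star, k, n) / jump_probability(p, k, n)

n = 200
# Fix the rate 20% below the jump-optimal value; the penalty grows with k.
for k in (10, 20, 40):
    print(k, slowdown(0.8 * (k / n), k, n))
```

By stacking many such valleys with widths tied to the target rate, the construction turns this per-jump penalty into a super-polynomial gap in total runtime.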

Practical Implications

  • Parameter tuning is problem‑specific: The result reinforces the industry practice of empirically selecting mutation rates (or using adaptive schemes) rather than relying on a textbook “best” value such as $1/n$.
  • Benchmark design: DistantSteppingStones can serve as a stress test for adaptive mutation mechanisms (e.g., self‑adjusting EA, 1/5‑th rule, or reinforcement‑learning‑based controllers). If an adaptive method can identify the hidden optimal rate on this landscape, it gains credibility for more realistic problems.
  • Algorithm portfolios: Since no single rate dominates, developers may consider portfolio approaches—running several EA instances in parallel with different mutation rates and allocating resources dynamically based on early performance signals.
  • Guidance for hybrid methods: The construction highlights scenarios where large, coordinated bit flips are beneficial (to cross valleys). Hybrid algorithms that combine mutation with crossover or local search could mitigate the need for finely tuned mutation probabilities.
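A portfolio run of the kind suggested above can be sketched with a minimal $(1+1)$ EA. The code below uses OneMax as a stand-in objective (the paper's DistantSteppingStones construction is not reproduced here), and the budget and rate choices are illustrative:

```python
import random

def one_max(x):
    # Stand-in objective: number of ones in the bit string.
    return sum(x)

def one_plus_one_ea(p: float, n: int, budget: int, seed: int = 0) -> int:
    """Minimal (1+1) EA with standard bit mutation at fixed rate p.

    Returns the number of evaluations until the optimum is found,
    or `budget` if it was not reached.
    """
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = one_max(x)
    for t in range(budget):
        y = [bit ^ (rng.random() < p) for bit in x]  # flip each bit w.p. p
        fy = one_max(y)
        if fy >= fx:  # elitist acceptance: keep the offspring if not worse
            x, fx = y, fy
        if fx == n:
            return t + 1
    return budget

n = 50
# A tiny portfolio: run the same algorithm with several fixed rates
# and compare how quickly each reaches the optimum.
results = {p: one_plus_one_ea(p, n, budget=20_000) for p in (0.5 / n, 1.0 / n, 4.0 / n)}
print(results)
```

In a real portfolio the instances would run in parallel with a shared budget reallocated toward the rates showing early progress; the sequential loop above only sketches the comparison.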

Limitations & Future Work

  • Artificial fitness function: DistantSteppingStones is deliberately pathological; real‑world problems rarely exhibit such extreme plateaus and valleys. Hence, the density result does not directly dictate performance on practical benchmarks.
  • Focus on the $(1+1)$ EA: The analysis is limited to the simplest elitist EA. Extending the density argument to population‑based EAs, genetic algorithms with crossover, or multi‑objective variants remains open.
  • Static vs. adaptive rates: While the paper shows static rates can be forced to be optimal, it does not evaluate adaptive mutation strategies on the constructed landscapes. Future work could compare self‑adjusting schemes against the theoretical optimum.
  • Scalability of construction: The required plateau/valley sizes grow with the desired precision $\varepsilon$, potentially leading to impractically large problem instances for very fine tolerances.

Overall, the paper delivers a clean theoretical message: optimal mutation rates are inherently problem‑dependent, encouraging developers to embrace adaptive or problem‑aware tuning rather than relying on a universal default.

Authors

  • Andrew James Kelley

Paper Information

  • arXiv ID: 2602.18989v1
  • Categories: cs.NE
  • Published: February 22, 2026