[Paper] Variable Search Stepsize for Randomized Local Search in Multi-Objective Combinatorial Optimization

Published: February 5, 2026 at 08:59 AM EST
4 min read
Source: arXiv


Overview

The paper introduces Variable Stepsize Randomized Local Search (VS‑RLS), a lightweight yet powerful algorithm for tackling multi‑objective combinatorial optimization problems (MOCOPs). By dynamically adjusting the neighbourhood size during the search, VS‑RLS bridges the gap between broad exploration and fine‑grained exploitation, delivering performance that rivals—or even surpasses—state‑of‑the‑art multi‑objective evolutionary algorithms (MOEAs) on a variety of benchmark problems.

Key Contributions

  • Dynamic stepsize mechanism: A simple rule that shrinks the neighbourhood radius over time, allowing the algorithm to start with a global search and gradually focus on local refinements.
  • Algorithmic simplicity: VS‑RLS requires only a random solution archive and a neighbourhood operator; no complex population management or Pareto‑ranking structures are needed.
  • Extensive empirical validation: Benchmarks across several classic MOCOPs (e.g., multi‑objective knapsack, traveling salesman, set covering) show consistent improvements over fixed‑step local search and competitive MOEAs.
  • Generalizability: The stepsize schedule is problem‑agnostic and can be plugged into existing randomized local search frameworks with minimal code changes.
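The archive bookkeeping these contributions describe boils down to a Pareto-dominance test plus an insert-and-prune rule. A minimal sketch (minimization assumed; function names are illustrative, not the authors' code):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    """Add candidate if no archive member dominates it; prune members it dominates."""
    if any(dominates(m, candidate) for m in archive):
        return archive  # candidate is dominated; archive unchanged
    return [m for m in archive if not dominates(candidate, m)] + [candidate]
```

This is the only data structure VS‑RLS needs, which is what makes it so easy to graft onto an existing local-search loop.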

Methodology

  1. Initialisation – Randomly sample a solution and store it in an archive that tracks all non‑dominated solutions discovered so far.
  2. Variable stepsize schedule – Define a decreasing function s(t) (e.g., exponential decay or linear reduction) that maps the current iteration t to a neighbourhood radius. Early iterations use a large s(t) (wide neighbourhood), later iterations use a small s(t) (tight neighbourhood).
  3. Randomised local move – At each iteration:
    • Pick a solution uniformly from the archive.
    • Generate a neighbour by applying a problem‑specific mutation limited by the current stepsize s(t).
    • Evaluate the neighbour on all objectives.
    • If the neighbour is non‑dominated with respect to the archive, add it (and optionally prune dominated archive members).
  4. Termination – Stop after a preset budget of evaluations or when the stepsize reaches a minimal threshold.
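The four steps above can be sketched end to end on a toy bi-objective bitstring benchmark (LOTZ: maximize leading ones and trailing zeros). The exponential schedule, the mutation operator (flip up to s(t) bits), and all parameter values are illustrative assumptions, not the paper's exact setup:

```python
import math
import random

def lotz(x):
    """LeadingOnes/TrailingZeros: a classic bi-objective bitstring benchmark (maximize both)."""
    lo = next((i for i, b in enumerate(x) if b == 0), len(x))
    tz = next((i for i, b in enumerate(reversed(x)) if b == 1), len(x))
    return (lo, tz)

def dominates(a, b):
    """Pareto dominance for maximization."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def vs_rls(n=20, budget=5000, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    archive = [(tuple(x), lotz(x))]          # step 1: archive of non-dominated solutions
    for t in range(budget):
        # step 2: stepsize decays exponentially from n/2 down to 1 (assumed schedule)
        s = max(1, round((n / 2) * math.exp(-5 * t / budget)))
        parent, _ = rng.choice(archive)       # step 3a: pick uniformly from the archive
        child = list(parent)
        for i in rng.sample(range(n), s):     # step 3b: mutation bounded by s(t)
            child[i] ^= 1
        f = lotz(child)                       # step 3c: evaluate all objectives
        # step 3d: insert if non-dominated (and not a duplicate), prune dominated members
        if not any(dominates(g, f) or g == f for _, g in archive):
            archive = [(y, g) for y, g in archive if not dominates(f, g)]
            archive.append((tuple(child), f))
    return archive                            # step 4: stop after the evaluation budget
```

Swapping in a problem-specific neighbourhood operator (e.g., bounded 2-opt segments for TSP) is the only change needed for other MOCOPs.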

The core idea is analogous to “cooling” in simulated annealing, but instead of a temperature that controls acceptance probability, VS‑RLS controls how far a move can reach in the solution space.
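The "cooling" schedule can take several shapes; two generic choices matching the exponential-decay and linear-reduction options mentioned in step 2 might look like this (assumed forms, not the paper's exact formulas):

```python
def s_exp(t, T, s_max, s_min=1):
    """Exponential decay of the stepsize from s_max to s_min over T iterations."""
    return max(s_min, round(s_max * (s_min / s_max) ** (t / T)))

def s_lin(t, T, s_max, s_min=1):
    """Linear reduction of the stepsize from s_max to s_min over T iterations."""
    return max(s_min, round(s_max - (s_max - s_min) * t / T))
```

Exponential decay spends most of the budget at small stepsizes (heavy exploitation), while the linear schedule splits the budget more evenly between exploration and refinement.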

Results & Findings

| Benchmark | VS‑RLS vs. Fixed‑Step RLS | VS‑RLS vs. Leading MOEAs |
| --- | --- | --- |
| Multi‑objective knapsack (30 items) | ↑ 12 % hypervolume improvement | Comparable or better on 4/5 instances |
| Multi‑objective TSP (50 cities) | ↑ 9 % IGD reduction | Outperformed NSGA‑II and MOEA/D on 3/4 instances |
| Set covering (100 elements) | ↑ 15 % spread metric | Matched MOEA/D‑DE on most runs |
  • Convergence speed: VS‑RLS reaches high‑quality Pareto fronts in roughly half the evaluations required by the best MOEAs.
  • Robustness: Performance variance across 30 independent runs is significantly lower than that of MOEAs, indicating stable behaviour.
  • Scalability: The algorithm’s runtime grows linearly with the number of decision variables, making it suitable for larger combinatorial instances where MOEAs become computationally expensive.
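For readers unfamiliar with the hypervolume indicator reported above: in two dimensions it is the area dominated by the front and bounded by a reference point, computable with a simple sweep (minimization assumed; a standard textbook method, not the paper's implementation):

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D front w.r.t. reference point ref (minimization):
    the area dominated by at least one point and bounded above by ref."""
    # keep only points that strictly dominate the reference point, sweep left to right
    pts = sorted(p for p in front if p[0] < ref[0] and p[1] < ref[1])
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y < prev_y:                       # skip points dominated within the front
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv
```

A larger hypervolume means the front is both closer to the true Pareto front and better spread, which is why it is the headline metric in the knapsack results.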

Practical Implications

  • Fast prototyping: Developers can embed VS‑RLS into existing optimisation pipelines with a few lines of code, avoiding the overhead of maintaining large populations or complex selection mechanisms.
  • Resource‑constrained environments: Because VS‑RLS needs only a modest archive and a simple neighbourhood operator, it fits well on edge devices, embedded systems, or cloud functions with tight CPU/memory budgets.
  • Hybrid optimisation: VS‑RLS can serve as a pre‑processor that quickly generates a high‑quality initial Pareto archive for downstream MOEAs, potentially reducing the total optimisation time.
  • Industry use‑cases: Scheduling (e.g., job‑shop, vehicle routing), portfolio selection, and network design problems often involve discrete decisions and multiple objectives; VS‑RLS offers a low‑maintenance alternative that still delivers competitive trade‑off solutions.

Limitations & Future Work

  • Neighbourhood design dependence: The effectiveness of VS‑RLS hinges on having a well‑defined, problem‑specific neighbourhood operator; poorly chosen moves can diminish the benefits of variable stepsizes.
  • Stepsize schedule tuning: While the authors provide generic decay functions, the optimal schedule may still require empirical tuning for highly irregular landscapes.
  • Theoretical guarantees: The paper offers empirical evidence but lacks formal convergence proofs or runtime complexity bounds for arbitrary MOCOPs.
  • Future directions: Extending VS‑RLS to handle dynamic objectives, integrating adaptive schedule learning (e.g., reinforcement‑learning‑based stepsize control), and exploring parallel archive updates for massive‑scale problems.

Authors

  • Xuepeng Ren
  • Maocai Wang
  • Guangming Dai
  • Zimin Liang
  • Qianrong Liu
  • Shengxiang Yang
  • Miqing Li

Paper Information

  • arXiv ID: 2602.05675v1
  • Categories: cs.NE
  • Published: February 5, 2026