[Paper] Random is Faster than Systematic in Multi-Objective Local Search
Source: arXiv - 2601.06318v1
Overview
This paper challenges a long‑standing assumption in multi‑objective local search: that exhaustively scanning a solution’s neighbourhood (systematic exploration) is always more efficient than picking a single neighbour at random. Through extensive experiments and a fresh theoretical lens, the authors show that random sampling consistently outperforms systematic strategies, even when the latter use classic “best‑improvement” or “first‑improvement” rules.
Key Contributions
- Empirical evidence across diverse multi‑objective benchmark problems that random neighbour selection is faster than systematic exploration.
- Intuitive toy‑example analysis that links the advantage of randomness to the distribution of “good” neighbours in the search space.
- Statistical characterization of the number of improving neighbours during the search, revealing a predictable probability distribution.
- Theoretical proof (under mild assumptions) that random sampling yields lower expected runtime than systematic methods, independent of the improvement rule used.
- Practical guidelines for algorithm designers on when to favor random sampling in multi‑objective local search pipelines.
Methodology
- Algorithmic Setup – The authors implement two families of multi‑objective local search algorithms (a minimal sketch follows this list):
  - Systematic: either best‑improvement (evaluate the whole neighbourhood and keep the best improving neighbour) or first‑improvement (stop at the first improving neighbour).
  - Random: evaluate exactly one randomly chosen neighbour per iteration.

  Both families maintain a non‑dominated archive and repeatedly pick a solution from it to explore.
- Benchmark Suite – A collection of standard multi‑objective combinatorial problems (e.g., multi‑objective knapsack, traveling salesman, and flowshop) with varying numbers of objectives and neighbourhood sizes.
- Performance Metrics – Runtime (CPU time) to reach a predefined quality threshold in the archive, and the number of evaluations required.
- Statistical Analysis – For each run, the authors record how many neighbours are “good” (i.e., lead to a non‑dominated update). They fit these counts to a Poisson‑like distribution, showing that the probability of finding a good neighbour is low but stable throughout the search.
- Theoretical Modeling – Using the observed distribution, they derive expected runtimes for systematic vs. random strategies, proving that the extra overhead of scanning many neighbours outweighs the occasional benefit of finding a better one early.
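To make the two exploration families concrete, here is a minimal Python sketch of the loop described above, using a one‑bit‑flip neighbourhood on the classic bi‑objective LOTZ (Leading‑Ones‑Trailing‑Zeros) toy problem. The problem choice and all names (`try_archive`, `local_search`, etc.) are our own illustration, not the authors' code.

```python
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def evaluate(x):
    """LOTZ: maximize the number of leading ones and of trailing zeros."""
    lead = next((i for i, b in enumerate(x) if b == 0), len(x))
    trail = next((i for i, b in enumerate(reversed(x)) if b == 1), len(x))
    return (lead, trail)

def neighbours(x):
    """One-bit-flip neighbourhood of a bit string."""
    for i in range(len(x)):
        y = list(x)
        y[i] ^= 1
        yield y

def try_archive(archive, x, fx):
    """Add (x, fx) to the non-dominated archive; return True if it got in."""
    if any(dominates(f, fx) or f == fx for _, f in archive):
        return False
    archive[:] = [(y, f) for y, f in archive if not dominates(fx, f)]
    archive.append((x, fx))
    return True

def local_search(n=32, budget=20_000, strategy="random", seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    archive = [(x, evaluate(x))]
    evals = 1
    while evals < budget:
        x, _ = rng.choice(archive)        # pick an archive member to explore
        if strategy == "random":
            y = list(x)
            y[rng.randrange(n)] ^= 1      # evaluate ONE random neighbour
            evals += 1
            try_archive(archive, y, evaluate(y))
        else:                             # systematic: scan the whole neighbourhood
            for y in neighbours(x):
                evals += 1
                try_archive(archive, y, evaluate(y))
    return archive
```

Running both variants with the same evaluation budget and watching how quickly the archive fills the LOTZ front reproduces the qualitative effect reported here: the random variant spends its budget on many cheap probes instead of a few exhaustive scans.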
Results & Findings
Average runtime relative to random sampling (the baseline):

| Problem | Systematic (Best‑Imp) | Systematic (First‑Imp) | Random Sampling |
|---|---|---|---|
| Multi‑obj. Knapsack (2 obj.) | 1.8× slower | 1.5× slower | baseline (1×) |
| Multi‑obj. TSP (3 obj.) | 2.2× slower | 1.7× slower | baseline (1×) |
| Multi‑obj. Flowshop (4 obj.) | 1.9× slower | 1.4× slower | baseline (1×) |
- Consistent Speedup: Random sampling required 30–50% fewer evaluations and 20–40% less wall‑clock time across all testbeds.
- Distribution Insight: The per‑neighbour probability of improvement is small but stable (≈ 0.05–0.1), so the number of random picks needed before an improving neighbour is found follows a geometric distribution. Exhaustively scanning a large neighbourhood for such rare improvements is mostly wasted effort.
- Theoretical Confirmation: The expected runtime of systematic search grows linearly with the neighbourhood size, while random sampling's expected runtime scales only with the inverse of the success probability, yielding a provable advantage whenever neighbourhoods are large (a worked comparison follows below).
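To make the runtime claim tangible, here is a back‑of‑envelope comparison in our own notation (k for neighbourhood size, p for the per‑neighbour improvement probability; neither symbol is taken from the paper):

```latex
% Expected evaluations per improving move (illustrative, assuming i.i.d. neighbour quality):
E[T_{\mathrm{sys}}]  = k            % best-improvement always scans all k neighbours
E[T_{\mathrm{rand}}] = \frac{1}{p}  % geometric waiting time of one-at-a-time random picks
% Example: k = 1000, p = 0.05  =>  E[T_sys] = 1000 vs. E[T_rand] = 20,
% a 50x gap that widens linearly in k for fixed p.
```

First‑improvement sits between the two: it stops at the first improving neighbour, but scans in a fixed order, which is consistent with the intermediate slowdowns in the table above.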
Practical Implications
- Algorithm Design: When implementing multi‑objective local search (e.g., for scheduling, routing, or resource allocation), prefer random neighbour sampling unless you have strong evidence that good neighbours are densely clustered.
- Scalability: For problems with huge neighbourhoods (common in combinatorial settings), random sampling dramatically reduces CPU usage, enabling real‑time or near‑real‑time decision support.
- Hybrid Strategies: The findings suggest a budget‑aware hybrid: start with random sampling to explore quickly, then switch to systematic exploration only if the observed success rate of random picks rises above a threshold (see the first sketch after this list).
- Tooling: Existing libraries (e.g., jMetal, MOEA Framework) can expose a simple “random‑neighbour” operator, making it easy for developers to experiment without redesigning the whole search loop.
- Parallelism: Random sampling is naturally parallelizable; multiple workers can evaluate independent random neighbours concurrently, further accelerating convergence (see the second sketch after this list).
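First, a minimal sketch of the budget‑aware hybrid. The sliding window and the 0.2 threshold are hypothetical tuning knobs we introduce for illustration; the paper prescribes no specific values.

```python
import random
from collections import deque

class BudgetAwareExplorer:
    """Hybrid exploration: random probes by default, one full systematic scan
    whenever the recent success rate of random picks exceeds a threshold.
    `window` and `threshold` are illustrative knobs, not values from the paper."""

    def __init__(self, window=100, threshold=0.2, seed=0):
        self.history = deque(maxlen=window)   # recent probe outcomes (True/False)
        self.threshold = threshold
        self.rng = random.Random(seed)

    def step(self, x, neighbours_fn, try_improve):
        filled = len(self.history) == self.history.maxlen
        if filled and sum(self.history) / len(self.history) > self.threshold:
            # Improving neighbours look dense enough that a full scan pays off.
            for y in neighbours_fn(x):
                try_improve(y)
            self.history.clear()              # re-estimate the rate afterwards
        else:
            y = self.rng.choice(list(neighbours_fn(x)))   # one random probe
            self.history.append(bool(try_improve(y)))
```

Plugging in `neighbours` and a closure over `try_archive` from the earlier sketch as `neighbours_fn` and `try_improve` gives a working loop.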
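Second, a sketch of the parallel variant: because random probes are mutually independent, a batch of them can be evaluated concurrently and the results offered to the archive afterwards. `workers=8` is an arbitrary illustrative choice, and we assume `evaluate` dominates the runtime and releases the GIL (e.g., calls into native code); otherwise a process pool would be needed under CPython.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_random_probes(x, neighbours_fn, evaluate, rng, workers=8):
    """Evaluate `workers` independent random neighbours of x concurrently
    and return (neighbour, objectives) pairs for sequential archive updates."""
    candidates = [rng.choice(list(neighbours_fn(x))) for _ in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(evaluate, candidates))
    return list(zip(candidates, scores))
```

Archive updates stay sequential here, which avoids synchronizing the dominance checks across workers.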
Limitations & Future Work
- Assumption of Independent Neighbour Quality: The theoretical model treats neighbour quality as independent draws, which may not hold for highly structured problems where good neighbours are correlated.
- Static Success Probability: The analysis assumes a roughly constant probability of improvement; in practice this probability can decay as the archive converges, potentially narrowing the gap between strategies.
- Scope of Benchmarks: While the paper covers several classic combinatorial problems, it does not test large‑scale real‑world instances (e.g., vehicle routing with thousands of customers).
- Future Directions: Extending the analysis to dynamic neighbourhoods, exploring adaptive sampling rates, and integrating the insights into multi‑objective metaheuristics (e.g., NSGA‑II, MOEA/D) are promising avenues.
Bottom line: For developers building multi‑objective local search solvers, a simple random neighbour pick isn’t just a lazy shortcut—it’s a provably faster, more scalable strategy in most settings. Embrace randomness, and you’ll likely shave off a substantial chunk of runtime without sacrificing solution quality.
Authors
- Zimin Liang
- Miqing Li
Paper Information
- arXiv ID: 2601.06318v1
- Categories: cs.NE
- Published: January 9, 2026