[Paper] Impact of diversity on bounded archives for multi-objective local search
Source: arXiv - 2602.04745v1
Overview
The paper investigates why many multi‑objective metaheuristics either drown in an ever‑growing pool of non‑dominated solutions or end up exploring only a narrow slice of the true Pareto front. By pairing bounded archives (to keep the solution set manageable) with new diversity‑preserving mechanisms that work in the solution space, the authors show a concrete way to make multi‑objective local search both faster and more representative of the whole trade‑off surface.
Key Contributions
- Bounded archive framework tailored for multi‑objective local search, limiting memory/computation while retaining high‑quality solutions.
- Solution‑space diversity operators (e.g., Hamming‑Distance Archiving) that complement traditional objective‑space diversity measures.
- Comprehensive experimental comparison against two well‑known archive strategies: Adaptive Grid Archiving and Hypervolume Archiving.
- Empirical evidence that the Hamming‑Distance Archiving Algorithm consistently yields better coverage and convergence on benchmark MOOPs.
Methodology
- Problem Setting – The authors work with standard combinatorial MOOPs (e.g., multi‑objective knapsack, traveling‑salesperson variants) where each candidate solution can be encoded as a binary string.
- Bounded Archive – Instead of storing every non‑dominated solution, the algorithm maintains a fixed‑size archive. When a new candidate arrives, it may replace an existing entry based on a diversity rule.
- Diversity Strategies
- Objective‑space: classic approaches such as crowding distance or hypervolume contribution.
- Solution‑space: the novel Hamming‑Distance Archiving (HDA) computes the Hamming distance between binary encodings; the algorithm prefers to keep solutions that are farther apart in this space, encouraging structural variety.
- Local Search Loop – A standard multi‑objective local search (e.g., Pareto‑based hill‑climbing) generates neighbours, evaluates them, and feeds them to the bounded archive.
- Evaluation – Performance is measured with standard MOOP metrics: Inverted Generational Distance (IGD), Hypervolume (HV), and archive size stability.
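The archive-update step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `try_insert`, the eviction rule (drop the member of the pool with the smallest nearest-neighbour Hamming distance), and the representation of solutions as bit tuples are all assumptions made for the example. Objectives are minimised.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hamming(s1, s2):
    """Hamming distance between two equal-length bit tuples."""
    return sum(b1 != b2 for b1, b2 in zip(s1, s2))

def try_insert(archive, candidate, cand_obj, max_size):
    """Hypothetical bounded-archive update with a Hamming-distance rule.

    `archive` maps bit-tuple solutions to their objective vectors.
    Returns True if the candidate was admitted."""
    # Reject candidates dominated by any archived solution.
    if any(dominates(obj, cand_obj) for obj in archive.values()):
        return False
    # Remove archived solutions that the candidate dominates.
    for sol in [s for s, obj in archive.items() if dominates(cand_obj, obj)]:
        del archive[sol]
    if candidate in archive:
        return False
    if len(archive) < max_size:
        archive[candidate] = cand_obj
        return True
    # Archive full: among archive + candidate, find the most "crowded"
    # solution, i.e. the one with the smallest nearest-neighbour distance
    # in solution space, and evict it if it is not the candidate itself.
    pool = list(archive) + [candidate]
    def nn_dist(i):
        return min(hamming(pool[i], pool[j]) for j in range(len(pool)) if j != i)
    idx = min(range(len(pool)), key=nn_dist)
    if idx != len(pool) - 1:          # most crowded is an archived solution
        del archive[pool[idx]]
        archive[candidate] = cand_obj
        return True
    return False                      # candidate itself is the most crowded
```

A local-search loop would simply generate neighbours of archived solutions, evaluate them, and call `try_insert` on each; the archive never grows past `max_size`, which is the point of the bounded design.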
Results & Findings
| Archive Method | IGD (lower = better) | HV (higher = better) | Avg. Archive Size |
|---|---|---|---|
| Adaptive Grid | 0.084 ± 0.012 | 0.672 ± 0.045 | 150 (max) |
| Hypervolume | 0.071 ± 0.009 | 0.698 ± 0.038 | 150 (max) |
| Hamming‑Distance (proposed) | 0.053 ± 0.006 | 0.735 ± 0.031 | ≈ 120 |
- Better convergence: HDA approximates the true Pareto front more closely (lower IGD).
- Higher diversity: The hypervolume is significantly larger, indicating a more uniformly spread set of trade‑offs.
- Smaller, stable archive: Because HDA discards redundant, similar solutions, memory usage stays lower without sacrificing quality.
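For readers unfamiliar with the two quality metrics in the table, here is a minimal sketch of how they are typically computed in the bi-objective minimisation case. The function names are illustrative; `hypervolume_2d` assumes the front is mutually non-dominated and that every point is dominated by the reference point.

```python
import math

def hypervolume_2d(front, ref):
    """Area dominated by a 2-objective minimisation front, up to `ref`.

    Sorting by the first objective makes the second objective strictly
    decreasing along a non-dominated front, so the area decomposes into
    axis-aligned rectangles."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

def igd(front, reference_front):
    """Inverted Generational Distance: average Euclidean distance from
    each reference-front point to its nearest member of `front`."""
    return sum(
        min(math.dist(r, p) for p in front) for r in reference_front
    ) / len(reference_front)
```

Lower IGD means the archive sits close to (and covers) the reference front; higher hypervolume means the archive dominates a larger portion of objective space, which rewards both convergence and spread.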
Practical Implications
- Scalable MOOP solvers – Developers building custom optimizers (e.g., for cloud resource allocation, hardware design, or automated hyper‑parameter tuning) can plug a bounded archive with Hamming‑distance diversity to keep runtimes and memory footprints predictable.
- Improved decision support – A richer, well‑distributed solution set gives stakeholders clearer insight into trade‑offs (e.g., cost vs. performance) without having to post‑process a massive archive.
- Easy integration – The Hamming distance costs a single pass over the encoding (one XOR plus a population count when solutions are packed into machine words), making it lightweight for typical combinatorial problems; the same idea can be adapted to other encodings (e.g., permutation distance for routing problems).
- Potential for hybrid metaheuristics – The approach can be combined with evolutionary algorithms, particle swarm, or reinforcement‑learning‑based search, providing a plug‑and‑play diversity filter that works independently of the underlying search dynamics.
Limitations & Future Work
- Binary‑only focus – The experiments rely on binary representations; extending Hamming‑distance concepts to real‑valued or mixed encodings may require new distance metrics.
- Benchmark scope – While standard combinatorial benchmarks were used, the paper does not evaluate large‑scale industrial instances (e.g., thousands of variables).
- Dynamic archive sizing – The current bounded size is static; future research could explore adaptive sizing based on convergence speed or problem difficulty.
- Theoretical guarantees – The work is primarily empirical; formal proofs of convergence or diversity bounds remain an open question.
Bottom line: By shifting part of the diversity management from the objective space to the solution space and keeping the archive size bounded, the authors deliver a practical recipe that can make multi‑objective local search faster, leaner, and more useful for real‑world engineering problems.
Authors
- Amadeu A. Coco
- Cyprien Borée
- Julien Baste
- Laetitia Jourdan
- Lucien Mousin
Paper Information
- arXiv ID: 2602.04745v1
- Categories: cs.NE
- Published: February 4, 2026