[Paper] Yukthi Opus: A Multi-Chain Hybrid Metaheuristic for Large-Scale NP-Hard Optimization
Source: arXiv - 2601.01832v1
Overview
The paper introduces Yukthi Opus (YO), a new hybrid metaheuristic that tackles large‑scale NP‑hard optimization problems while respecting a strict evaluation‑budget limit. By blending probabilistic exploration, greedy refinement, and adaptive simulated annealing across multiple parallel search chains, YO delivers high‑quality solutions for notoriously difficult tasks such as the Traveling Salesman Problem (TSP) and multimodal benchmark functions.
Key Contributions
- Two‑phase hybrid architecture that cleanly separates a burn‑in MCMC exploration stage from a refinement loop combining greedy local search and simulated annealing.
- Multi‑chain execution to reduce sensitivity to initial seeds and improve robustness across runs.
- Spatial blacklist that records and avoids re‑evaluating regions known to be poor, conserving the limited evaluation budget.
- Adaptive reheating schedule for simulated annealing, enabling controlled escapes from local minima without blowing up the budget.
- Comprehensive empirical evaluation on Rastrigin (5‑D), Rosenbrock (5‑D), and TSP instances ranging from 50 to 200 cities, with ablation studies that isolate the impact of each component.
- Competitive performance against state‑of‑the‑art optimizers such as CMA‑ES, Bayesian Optimization, and accelerated Particle Swarm Optimization, especially on multimodal and large‑scale problems.
Methodology
- Burn-in Phase (MCMC Exploration)
  - A set of parallel Markov chains draws candidate solutions from a proposal distribution.
  - Acceptance follows the classic Metropolis‑Hastings rule, ensuring diverse coverage of the search space while staying within the pre‑allocated evaluation budget (a minimal sketch of one chain follows).
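To make the burn-in concrete, here is a minimal Python sketch of a single Metropolis-Hastings chain for a minimization objective. The Gaussian random-walk proposal, the step size `sigma`, and the implicit temperature of 1 are illustrative assumptions; the paper does not pin these choices down in the summary above.

```python
import math
import random

def mh_burn_in(f, x0, sigma, n_steps, seed=0):
    """One burn-in chain: Gaussian random-walk proposals, Metropolis acceptance.

    f       -- objective to minimize (each call costs one evaluation)
    x0      -- starting point (list of floats)
    sigma   -- proposal step size (an assumption, not a paper value)
    n_steps -- number of objective evaluations this chain may spend
    """
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best_x, best_fx = list(x), fx
    for _ in range(n_steps):
        # Gaussian random-walk proposal around the current point.
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        f_cand = f(cand)
        # Metropolis rule for minimization (implicit temperature 1, assumed):
        # always accept improvements; accept uphill moves with prob exp(-delta).
        if f_cand <= fx or rng.random() < math.exp(-(f_cand - fx)):
            x, fx = cand, f_cand
            if fx < best_fx:
                best_x, best_fx = list(x), fx
    return best_x, best_fx
```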
- Hybrid Optimization Loop
  - Greedy Local Search: Each promising candidate is locally refined by iteratively applying problem‑specific move operators (e.g., 2‑opt swaps for TSP) until no immediate improvement is found.
  - Simulated Annealing with Adaptive Reheating: After local refinement, a temperature schedule governs occasional uphill moves. When the algorithm stalls, the temperature is “reheated” based on a simple heuristic (e.g., recent acceptance rate), allowing the search to jump out of shallow basins (see the combined sketch below).
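The sketch below combines both refinement steps for TSP: exhaustive 2-opt descent, then annealing over random 2-opt moves with the temperature reheated when the recent acceptance rate collapses. The cooling rate, acceptance window, and reheat factor are illustrative assumptions, not values from the paper.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    """Greedy descent: apply improving 2-opt reversals until none remain."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, dist) < tour_length(tour, dist):
                    tour, improved = cand, True
    return tour

def anneal(tour, dist, n_moves, t0=1.0, cool=0.995, reheat=2.0, seed=0):
    """SA over random 2-opt moves; reheat when the acceptance rate collapses."""
    rng = random.Random(seed)
    t, accepted, window = t0, 0, 100  # window size is an assumption
    cur_len = tour_length(tour, dist)
    for step in range(1, n_moves + 1):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j][::-1] + tour[j:]
        delta = tour_length(cand, dist) - cur_len
        if delta < 0 or rng.random() < math.exp(-delta / t):
            tour, cur_len = cand, cur_len + delta
            accepted += 1
        t *= cool  # geometric cooling
        if step % window == 0:
            if accepted / window < 0.05:  # stalled: reheat to escape the basin
                t *= reheat
            accepted = 0
    return tour
```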
- Spatial Blacklist
  - The algorithm maintains a spatial index of regions (e.g., hyper‑cubes) that have yielded low‑quality solutions. Future proposals that fall inside blacklisted zones are rejected outright, saving evaluations (sketched below).
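A grid of fixed-size hyper-cubes is one simple way to realize such an index; the sketch below assumes that representation, since the paper's exact spatial data structure is only summarized at a high level. The cell size and rejection policy are assumptions.

```python
class SpatialBlacklist:
    """Hyper-cube blacklist over a continuous search space (assumed scheme)."""

    def __init__(self, cell_size):
        self.cell_size = cell_size   # edge length of each hyper-cube (assumed)
        self.blocked = set()         # integer grid coordinates of bad cells

    def _cell(self, x):
        # Map a point to the integer coordinates of its enclosing hyper-cube.
        return tuple(int(xi // self.cell_size) for xi in x)

    def blacklist(self, x):
        """Record the region around x as a known low-quality zone."""
        self.blocked.add(self._cell(x))

    def is_blocked(self, x):
        """Check a proposal before spending an objective evaluation on it."""
        return self._cell(x) in self.blocked
```

A proposal is tested with `is_blocked` before being evaluated, so rejected proposals cost nothing from the budget. This keying only works for continuous spaces; a combinatorial problem such as TSP would need a different region signature.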
- Multi-Chain Coordination
  - Multiple independent chains run in parallel, periodically sharing their best solutions. This cross‑pollination reduces the chance that all chains converge to the same sub‑optimal region (one possible scheme is sketched below).
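The coordination loop below assumes a hypothetical chain API (`step`, `best_x`, `best_fx`, `restart_from`); restarting the worst chain from the global best is one plausible cross-pollination rule, not necessarily the authors' exact policy.

```python
def run_chains(chains, f, budget, sync_every=500):
    """Advance chains round-robin; periodically share the global best."""
    for step in range(budget // len(chains)):
        for chain in chains:
            chain.step(f)  # one objective evaluation per call (assumed API)
        if (step + 1) % sync_every == 0:
            best = min(chains, key=lambda c: c.best_fx)
            worst = max(chains, key=lambda c: c.best_fx)
            if worst is not best:
                worst.restart_from(best.best_x)  # cross-pollinate
    return min(chains, key=lambda c: c.best_fx)
```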
The entire pipeline is budget‑aware: the number of MCMC steps, local search iterations, and annealing moves are all pre‑computed to guarantee that the total number of expensive objective evaluations never exceeds the user‑specified limit.
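A minimal illustration of this pre-computation: fix per-stage evaluation counts up front so that no stage can overrun the cap. The 30/40/30 split is a placeholder, not the paper's allocation.

```python
def split_budget(total_evals, burn_in_frac=0.3, local_frac=0.4):
    """Pre-compute per-stage budgets (fractions are placeholder assumptions)."""
    burn_in = int(total_evals * burn_in_frac)
    local = int(total_evals * local_frac)
    annealing = total_evals - burn_in - local  # remainder keeps the exact cap
    return burn_in, local, annealing

# split_budget(10_000) -> (3000, 4000, 3000): the stage budgets sum exactly
# to the user-specified evaluation limit.
```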
Results & Findings
| Benchmark | Budget (evals) | Best‑found objective (YO) | Best competitor | Relative gain |
|---|---|---|---|---|
| Rastrigin (5‑D) | 10 k | 0.0012 | CMA‑ES (0.0035) | ≈ 65 % improvement |
| Rosenbrock (5‑D) | 12 k | 1.04 | Bayesian Opt. (1.27) | ≈ 18 % improvement |
| TSP‑50 | 15 k | 5 842 | PSO‑Accel (5 910) | ≈ 1.2 % improvement |
| TSP‑200 | 30 k | 23 467 | CMA‑ES (23 689) | ≈ 0.9 % improvement |
- Ablation studies reveal that removing the MCMC burn‑in degrades solution quality by ~30 %, while dropping the greedy local search leads to ~45 % worse results.
- Simulated annealing contributes modestly to final objective values but dramatically cuts variance across runs (standard deviation drops from 2.3 % to 0.8 %).
- Multi‑chain execution reduces the probability of catastrophic failure (i.e., getting stuck in a poor basin) from 12 % to under 2 % in the TSP experiments.
Overall, YO matches or exceeds the best published results while guaranteeing that the total number of costly function evaluations stays within the prescribed budget.
Practical Implications
- Black‑Box Optimization Services: Companies that expose expensive simulation APIs (e.g., CFD, financial risk models) can adopt YO to squeeze more performance out of a fixed query budget, reducing cloud costs.
- Automated Hyper‑Parameter Tuning: The budget‑aware nature of YO makes it a drop‑in replacement for Bayesian optimization when the training runs are extremely costly or when a hard evaluation cap is required.
- Routing & Logistics Software: For route‑planning engines that need near‑optimal TSP solutions on the fly (e.g., delivery drones), YO’s multi‑chain parallelism can be mapped to multi‑core or distributed environments, delivering high‑quality routes within strict latency constraints.
- Embedded / Edge Devices: Because YO’s evaluation budget is predetermined, developers can allocate a fixed compute budget on edge hardware (e.g., robotics) and still obtain reliable, high‑quality solutions without runtime surprises.
The algorithm’s modular design (MCMC, greedy, annealing) also lets practitioners swap in domain‑specific move operators or custom proposal distributions, tailoring YO to a wide range of combinatorial and continuous problems.
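As a sketch of that modularity, the refinement loop can be written against a generic move-operator signature so that 2-opt reversals, Gaussian perturbations, or any domain-specific move plugs in unchanged. The function and parameter names here are illustrative, not the paper's API.

```python
from typing import Callable, TypeVar

S = TypeVar("S")  # solution representation: a tour, a float vector, etc.

def refine(solution: S, objective: Callable[[S], float],
           move: Callable[[S], S], n_iters: int) -> S:
    """Greedy refinement parameterized by a user-supplied move operator."""
    best, best_val = solution, objective(solution)
    for _ in range(n_iters):
        cand = move(best)      # e.g., a 2-opt reversal or a Gaussian nudge
        val = objective(cand)
        if val < best_val:
            best, best_val = cand, val
    return best
```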
Limitations & Future Work
- Scalability to Very High Dimensions: Experiments stop at 5‑D continuous spaces; performance in hundreds of dimensions remains untested.
- Blacklisting Overhead: Maintaining the spatial blacklist can become memory‑intensive for extremely large search spaces, potentially offsetting some budget savings.
- Parameter Sensitivity: While the multi‑chain approach mitigates initialization bias, the choice of MCMC proposal variance and reheating thresholds still requires modest tuning.
- Future Directions: The authors suggest extending YO with surrogate models (e.g., Gaussian processes) to further reduce expensive evaluations, exploring adaptive chain‑count strategies, and benchmarking on real‑world industrial datasets (e.g., supply‑chain optimization).
Authors
- SB Danush Vikraman
- Hannah Abagail
- Prasanna Kesavraj
- Gajanan V Honnavar
Paper Information
- arXiv ID: 2601.01832v1
- Categories: cs.NE, cs.AI
- Published: January 5, 2026