[Paper] TESO: Tabu‑Enhanced Simulation Optimization for Noisy Black‑Box Problems

Published: December 30, 2025 at 01:03 AM EST
4 min read

Source: arXiv - 2512.24007v1

Overview

Simulation‑based optimization is a workhorse for many engineering and operations problems, but noisy evaluations and expensive simulations make it hard to find good solutions quickly. The paper introduces Tabu‑Enhanced Simulation Optimization (TESO), a new metaheuristic that blends adaptive search with memory‑based mechanisms (short‑term Tabu List and long‑term Elite Memory) to keep the search both diverse and focused, even when the objective function is stochastic.

Key Contributions

  • Hybrid metaheuristic that couples classic Tabu Search ideas with stochastic simulation optimization.
  • Two‑level memory system (see the data‑structure sketch after this list):
    • Short‑term Tabu List prevents immediate revisiting of recent solutions, reducing cycling.
    • Long‑term Elite Memory stores high‑quality solutions and perturbs them to intensify the search.
  • Aspiration criterion that lets the algorithm override tabu status when a candidate is exceptionally promising.
  • Open‑source implementation (Python) and reproducible benchmark data released on GitHub.
  • Empirical validation on a realistic queue‑optimization case study, showing statistically significant gains over standard SO baselines.
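
The two memory levels map naturally onto very simple data structures. The classes below are a minimal illustrative sketch, not the authors' released code: a bounded deque plays the role of the short‑term Tabu List and a small sorted archive the long‑term Elite Memory, with `TabuList` and `EliteMemory` as placeholder names.

```python
from collections import deque


class TabuList:
    """Short-term memory: remembers recently visited solutions (or moves)."""

    def __init__(self, size=10):
        # Bounded deque: once full, the oldest entry is dropped automatically.
        self.entries = deque(maxlen=size)

    def add(self, solution):
        self.entries.append(tuple(solution))

    def contains(self, solution):
        return tuple(solution) in self.entries


class EliteMemory:
    """Long-term memory: keeps the best solutions found so far (lower is better)."""

    def __init__(self, capacity=5):
        self.capacity = capacity
        self.archive = []  # list of (objective_value, solution) pairs

    def update(self, solution, value):
        self.archive.append((value, tuple(solution)))
        self.archive.sort(key=lambda item: item[0])
        del self.archive[self.capacity:]  # keep only the best `capacity` entries

    def best_value(self):
        return self.archive[0][0] if self.archive else float("inf")
```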

Methodology

  1. Problem Setting – The authors treat the optimizer as a black‑box: each candidate solution is evaluated by running a stochastic simulation (e.g., a queuing model) that returns a noisy performance estimate.
  2. Search Loop – Starting from a random solution, TESO iteratively generates a neighbourhood of candidate solutions (small perturbations of the current one), evaluates them with the noisy simulation, and moves to the best candidate that is admissible under the rules below.
  3. Tabu List – The most recent moves are recorded in a fixed‑size list; any move that would recreate a tabu‑marked solution is normally rejected, forcing the algorithm to explore new regions.
  4. Elite Memory – The best‑performing solutions seen so far are kept in a separate archive. Periodically, the algorithm selects an elite, applies a stronger perturbation, and injects the result back into the population, steering the search toward promising basins.
  5. Aspiration – If a tabu‑blocked move yields a solution better than any elite, the tabu restriction is lifted, ensuring that breakthroughs are not missed.
  6. Stopping Criteria – The process stops after a preset budget of simulation runs or when improvements plateau.

The design is deliberately lightweight: it requires only the ability to generate neighbours and evaluate them, making it easy to plug into existing simulation pipelines.
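
To make that concrete, here is a self‑contained, hedged sketch of a TESO‑style loop on a toy noisy objective. It is an interpretation of steps 1–6 above, not the paper's released implementation; the neighbourhood operator, tabu tenure, elite‑perturbation rate, and evaluation budget are all placeholder choices.

```python
import random
from collections import deque


def noisy_objective(x):
    """Stand-in for an expensive stochastic simulation (lower is better)."""
    return sum(xi ** 2 for xi in x) + random.gauss(0, 0.5)


def neighbours(x, step=0.2, k=8):
    """Generate k small perturbations of the current solution."""
    return [[xi + random.uniform(-step, step) for xi in x] for _ in range(k)]


def teso_like(dim=3, budget=500, tabu_size=20, elite_size=5):
    current = [random.uniform(-3, 3) for _ in range(dim)]
    tabu = deque(maxlen=tabu_size)  # short-term memory of recently visited regions
    elites = []                     # long-term memory: (value, solution) pairs
    evals = 0

    def key(x):
        # Coarse rounding so that revisiting "the same region" counts as tabu.
        return tuple(round(xi, 1) for xi in x)

    while evals < budget:
        best_cand, best_val = None, float("inf")
        for cand in neighbours(current):
            val = noisy_objective(cand)
            evals += 1
            aspiration = bool(elites) and val < elites[0][0]  # beats the best elite
            if key(cand) in tabu and not aspiration:
                continue  # tabu moves are normally rejected
            if val < best_val:
                best_cand, best_val = cand, val
        if best_cand is None:
            continue  # every neighbour was tabu; draw a fresh neighbourhood
        current = best_cand
        tabu.append(key(current))
        elites.append((best_val, current))
        elites.sort(key=lambda e: e[0])
        del elites[elite_size:]
        if random.random() < 0.1:  # occasionally restart from a perturbed elite
            _, elite = random.choice(elites)
            current = [xi + random.uniform(-0.5, 0.5) for xi in elite]
    return elites[0]  # (estimated value, best solution found)


if __name__ == "__main__":
    print(teso_like())
```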

Results & Findings

  • Queue‑optimization benchmark: TESO achieved a 12–15 % reduction in average queue length compared with a vanilla stochastic‑gradient‑descent baseline and a 7–9 % improvement over standard Tabu Search without elite memory.
  • Robustness to noise: Across multiple noise levels (variance injected into the simulation), TESO’s performance degradation was modest, whereas baseline methods deteriorated sharply.
  • Statistical significance: Paired t‑tests (α = 0.05) confirmed that the gains were not due to random chance (a minimal example of this kind of check is sketched after this list).
  • Ablation study: Removing either the Tabu List or the Elite Memory caused a noticeable drop in solution quality, highlighting the complementary role of diversification (Tabu) and intensification (Elite).
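
For readers who want to run the same kind of significance check on their own replications, the snippet below applies SciPy's paired t‑test at α = 0.05. The two arrays are synthetic placeholders standing in for per‑replication average queue lengths from matched runs; they are not the paper's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder per-replication results (e.g., average queue length per seed);
# in practice these would come from matched simulation runs of each method.
teso_results = rng.normal(loc=8.5, scale=0.6, size=30)
baseline_results = rng.normal(loc=9.7, scale=0.6, size=30)

t_stat, p_value = stats.ttest_rel(teso_results, baseline_results)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at alpha = 0.05")
```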

Practical Implications

  • Plug‑and‑play optimizer – Developers can drop TESO into any existing simulation‑based workflow (e.g., supply‑chain simulators, network traffic models, manufacturing line studies) without rewriting the simulation code.
  • Reduced simulation budget – By avoiding redundant evaluations and focusing on elite regions, teams can achieve comparable or better results with fewer costly simulation runs, translating into lower cloud compute costs.
  • Better handling of stochasticity – Industries that rely on Monte‑Carlo or discrete‑event simulations (finance, logistics, telecom) can benefit from TESO’s built‑in noise resilience, leading to more reliable decision support.
  • Open‑source foundation – The provided Python package includes utilities for neighbour generation, memory management, and logging, making it straightforward to extend (e.g., custom neighbourhood operators or parallel evaluation).
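
As an example of the kind of extension the package is said to support, the snippet below sketches a custom integer‑valued neighbourhood operator for, say, a staffing decision in a queueing model. The function name and call convention are hypothetical and would need to be adapted to the package's actual API.

```python
import random


def integer_neighbourhood(solution, k=10, lower=0, upper=50):
    """Hypothetical custom operator: perturb one integer decision variable at a time.

    `solution` is a list of integers (e.g., servers per station in a queueing model);
    the operator returns k candidates, each differing in a single coordinate.
    """
    candidates = []
    for _ in range(k):
        cand = list(solution)
        i = random.randrange(len(cand))
        cand[i] = min(upper, max(lower, cand[i] + random.choice([-2, -1, 1, 2])))
        candidates.append(cand)
    return candidates


# Example: candidate staffing plans around a 4-station configuration
print(integer_neighbourhood([3, 5, 2, 4]))
```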

Limitations & Future Work

  • Scalability to high‑dimensional spaces – The current neighbourhood operator is simple; performance on problems with hundreds of decision variables remains untested.
  • Parameter sensitivity – Tabu list size, elite archive capacity, and perturbation strengths were tuned manually for the case study; automated self‑tuning mechanisms could improve out‑of‑the‑box robustness.
  • Parallelism – The implementation runs simulations sequentially; future versions could exploit parallel simulation runs to relieve the evaluation bottleneck (one generic pattern is sketched after this list).
  • Broader benchmark suite – Validation on a wider set of noisy black‑box problems (e.g., hyper‑parameter tuning for machine‑learning models) would strengthen claims of generality.
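
On the parallelism point, one generic pattern, assuming each simulation run is an independent top‑level Python callable, is to fan the neighbourhood evaluations out over a process pool from the standard library. This is a general sketch, not part of the TESO package.

```python
import random
from concurrent.futures import ProcessPoolExecutor


def run_simulation(candidate):
    """Stand-in for one expensive, independent simulation run."""
    return sum(x * x for x in candidate) + random.gauss(0, 0.5)


def evaluate_neighbourhood(candidates, workers=4):
    """Evaluate all neighbourhood candidates in parallel, preserving order."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_simulation, candidates))


if __name__ == "__main__":  # guard required on platforms that spawn worker processes
    neighbourhood = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(8)]
    values = evaluate_neighbourhood(neighbourhood)
    best_value, best_candidate = min(zip(values, neighbourhood))
    print(best_value, best_candidate)
```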

Overall, TESO offers a practical, memory‑enhanced approach to noisy simulation optimization that can be readily adopted by developers looking to squeeze more performance out of expensive stochastic models.

Authors

  • Bulent Soykan
  • Sean Mondesire
  • Ghaith Rabadi

Paper Information

  • arXiv ID: 2512.24007v1
  • Categories: cs.NE, cs.AI
  • Published: December 30, 2025