[Paper] An Efficient Evolutionary Algorithm for Few-for-Many Optimization
Source: arXiv - 2601.06387v1
Overview
The paper introduces SoM‑EMOA, a new evolutionary algorithm designed for few‑for‑many (F4M) optimization—a setting where a tiny set of solutions must effectively cover dozens or even hundreds of conflicting objectives. By focusing on representative solutions rather than exhaustive Pareto‑front coverage, the authors tackle the scalability bottleneck that plagues traditional many‑objective optimizers.
Key Contributions
- A dedicated F4M algorithm: SoM‑EMOA (based on a $(\mu+1)$ evolution strategy) explicitly optimizes the F4M objective, extending the well‑known SMS‑EMOA framework.
- A flexible benchmark suite: Leveraging the equivalence between the R2 indicator and F4M formulations, the authors provide a systematic way to turn any existing multi‑objective problem into an F4M instance via weighted Tchebycheff scalarization.
- Empirical superiority: Extensive experiments on the new benchmark (including up to 100 objectives) show SoM‑EMOA consistently outperforms state‑of‑the‑art many‑objective algorithms, especially as the number of objectives grows.
- Open‑source implementation: The algorithm and benchmark generators are released on GitHub, enabling immediate reproducibility and community extensions.
Methodology
- Problem Formulation – F4M optimization seeks a few (e.g., 5–10) solutions that minimize the maximum regret across all possible weight vectors, effectively guaranteeing good performance for any user‑specified trade‑off.
- Algorithm Core – SoM‑EMOA follows a classic $(\mu+1)$ evolutionary loop:
- A population of $\mu$ individuals is maintained.
- One offspring is generated per generation via mutation (Gaussian perturbation) and optional crossover.
- Selection uses an F4M‑aware fitness: the algorithm computes the R2 indicator (the average, over a set of weight vectors, of the best weighted Tchebycheff scalarization value any population member achieves) and discards the individual whose removal degrades the indicator least, thereby preserving the most “representative” solutions.
- Benchmark Generation – Any multi‑objective test problem (DTLZ, WFG, etc.) is transformed by sampling a large set of weight vectors, evaluating the weighted Tchebycheff scalarization for each, and then using the resulting scalar values as the basis for the R2/F4M indicator. This yields a plug‑and‑play suite that can scale to arbitrary objective counts.
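Pulling the pieces above together, one plausible way to write the set‑quality measure SoM‑EMOA optimizes (our notation, not necessarily the paper's): for a candidate set $A$ of $\mu$ solutions, a finite set of weight vectors $W$, an ideal point $z^*$, and $m$ objectives,

```latex
R2(A; W) = \frac{1}{|W|} \sum_{w \in W} \min_{a \in A} \max_{1 \le i \le m} w_i \,\bigl| f_i(a) - z_i^* \bigr|
```

Each weight vector $w$ is served by whichever solution in $A$ minimizes its weighted Tchebycheff scalarization; averaging over $W$ rewards small sets whose members jointly cover many preference profiles (lower is better).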
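The selection step of the $(\mu+1)$ loop can be sketched in a few lines of NumPy. This is a minimal illustration of R2‑based survival selection under the weighted Tchebycheff scalarization, not the authors' implementation; the function names and toy usage are ours, and the incremental‑update tricks the paper relies on for efficiency are omitted:

```python
import numpy as np

def tchebycheff(F, W, z_star):
    """Weighted Tchebycheff scalarization.
    F: (n_solutions, m) objective values; W: (n_weights, m) weight vectors;
    z_star: (m,) ideal point. Returns a (n_weights, n_solutions) matrix of
    g(x | w) = max_i w_i * |f_i(x) - z_i*|."""
    diff = np.abs(F[None, :, :] - z_star[None, None, :])   # (1, n, m)
    return np.max(W[:, None, :] * diff, axis=2)            # (n_w, n)

def r2_indicator(F, W, z_star):
    """R2 indicator of a solution set: for each weight vector, take the best
    (smallest) Tchebycheff value achieved by any solution, then average over
    all weight vectors. Lower is better."""
    G = tchebycheff(F, W, z_star)
    return G.min(axis=1).mean()

def remove_worst(F, W, z_star):
    """(mu+1) survival step: drop the individual whose removal degrades the
    R2 indicator the least, i.e. the least 'representative' one."""
    n = F.shape[0]
    after_removal = []
    for i in range(n):
        keep = np.delete(np.arange(n), i)
        after_removal.append(r2_indicator(F[keep], W, z_star))
    worst = int(np.argmin(after_removal))  # smallest post-removal indicator
    return np.delete(F, worst, axis=0), worst
```

In a full run, `remove_worst` would be called once per generation after appending the new offspring's objective vector to `F`, shrinking the population back to $\mu$.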
Results & Findings
| Metric | SoM‑EMOA | SMS‑EMOA | NSGA‑III | MOEA/D | Comment |
|---|---|---|---|---|---|
| R2/F4M indicator (lower is better) | Best on 85 % of instances (up to 100 objectives) | 2nd best | 3rd | 4th | Gains grow with objective dimensionality |
| Runtime (seconds) | Comparable to SMS‑EMOA; modest overhead for indicator computation | Baseline | Higher (due to reference‑point handling) | Similar | Indicator can be efficiently updated incrementally |
| Solution diversity (spread) | Maintains a well‑dispersed set of representatives | Slightly less diverse | Over‑diverse (many redundant points) | Balanced | Fewer solutions needed, so diversity is measured among a tiny set |
Key take‑aways
- Scalability – SoM‑EMOA’s performance advantage becomes pronounced beyond ~30 objectives, where traditional algorithms either stall or produce bloated solution sets.
- Stability – The algorithm shows low variance across 30 independent runs, indicating reliable convergence to a high‑quality representative set.
Practical Implications
- Decision‑support systems: In domains like cloud resource allocation, portfolio optimization, or automotive design, stakeholders often need a handful of “policy” options that work well for many possible preference weightings. SoM‑EMOA can generate those options quickly, reducing the cognitive load on decision makers.
- Real‑time or embedded optimization: Because the algorithm maintains a small population and evaluates a lightweight indicator, it fits scenarios with tight compute budgets (e.g., edge devices tuning hyper‑parameters on‑the‑fly).
- Benchmarking and tooling: The provided test‑suite lets developers benchmark their own many‑objective solvers under the F4M lens, encouraging the community to adopt a more user‑centric evaluation metric rather than raw Pareto coverage.
- Integration with AutoML pipelines: AutoML frameworks can embed SoM‑EMOA to produce a concise set of model‑hyperparameter configurations that balance accuracy, latency, memory, and fairness—all at once.
Limitations & Future Work
- Indicator cost – Although the R2/F4M indicator is updated efficiently, its computation still scales with the number of sampled weight vectors; extremely large weight sets could become a bottleneck.
- Fixed population size – The $(\mu+1)$ scheme assumes a static $\mu$. Adaptive population sizing (e.g., increasing $\mu$ when the objective landscape is highly multimodal) might further improve robustness.
- Theoretical guarantees – The paper provides empirical evidence but lacks formal convergence proofs for the F4M objective; future work could explore provable bounds.
- Real‑world case studies – Demonstrating SoM‑EMOA on industrial datasets (e.g., supply‑chain optimization) would solidify its practical relevance and uncover domain‑specific challenges.
If you’re curious to try the algorithm yourself, the authors have made the source code publicly available at github.com/MOL-SZU/SoM-EMOA. Feel free to clone the repo, run the benchmark suite, and experiment with integrating SoM‑EMOA into your own multi‑objective pipelines. Happy optimizing!
Authors
- Ke Shang
- Hisao Ishibuchi
- Zexuan Zhu
- Qingfu Zhang
Paper Information
- arXiv ID: 2601.06387v1
- Categories: cs.NE
- Published: January 10, 2026