[Paper] Investigating the Interplay of Parameterization and Optimizer in Gradient-Free Topology Optimization: A Cantilever Beam Case Study

Published: January 29, 2026 at 02:09 PM EST
4 min read
Source: arXiv - 2601.22241v1

Overview

This paper explores how the way we encode a structural design (the parameterization) and the choice of gradient‑free optimizer interact when tackling topology optimization (TO) problems. Using a classic cantilever‑beam benchmark, the authors show that a good geometric representation can matter far more than the specific black‑box algorithm you pick, reshaping how engineers should approach automated design loops.

Key Contributions

  • Systematic benchmark of 3 geometric parameterizations (low‑, medium‑, and high‑fidelity) combined with 3 popular black‑box optimizers (Differential Evolution, CMA‑ES, and Heteroscedastic Evolutionary Bayesian Optimization).
  • Dimensionality study covering 10‑, 20‑, and 50‑dimensional design spaces, reflecting realistic TO problem sizes.
  • Quantitative evidence that parameterization quality dominates optimizer performance: a strong representation yields robust results across all algorithms, while a weak one makes the optimizer the limiting factor.
  • Guidelines for practitioners on prioritizing representation design before spending effort on algorithm tuning.
  • Open‑source implementation (released with the paper) enabling reproducibility and easy integration into existing engineering pipelines.

Methodology

  1. Problem definition – Minimize compliance (i.e., maximize stiffness) of a 2‑D cantilever beam while enforcing a connectivity constraint to avoid isolated material islands.
  2. Parameterizations
    • Pixel‑grid (binary): each design variable toggles material presence in a fixed grid.
    • Morphology‑aware (continuous density): uses a smooth density field with a low‑pass filter to control feature size.
    • Shape‑function (compact): encodes the beam shape via a set of control points and spline interpolation, drastically reducing dimensionality.
  3. Optimizers
    • Differential Evolution (DE) – classic population‑based mutation/crossover.
    • Covariance Matrix Adaptation Evolution Strategy (CMA‑ES) – adapts a multivariate Gaussian search distribution.
    • Heteroscedastic Evolutionary Bayesian Optimization (HEBO) – surrogate‑based, modeling varying noise levels across the design space.
  4. Experimental setup – For each (parameterization, optimizer, dimension) triple, 30 independent runs were executed, each limited to a fixed budget of finite‑element simulations (≈ 5 k evaluations). Performance was measured by final compliance and convergence speed.
  5. Statistical analysis – Non‑parametric tests (Kruskal‑Wallis + post‑hoc Dunn) assessed the significance of differences across configurations.
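To make the dimensionality reduction of the shape-function encoding (item 2) concrete, here is a minimal sketch of how a handful of control-point heights might expand into a dense density grid. The function name, grid size, and the use of linear interpolation are illustrative assumptions, not the paper's implementation (the authors use spline interpolation):

```python
import numpy as np

def shape_function_density(control_heights, nx=60, ny=20):
    """Map a few control-point heights (in [0, 1]) to a dense 2-D
    density grid by interpolating a beam-height profile along x.
    A cell is solid (1.0) if it lies below the interpolated height."""
    k = len(control_heights)
    x_ctrl = np.linspace(0.0, 1.0, k)           # control-point abscissae
    x_grid = np.linspace(0.0, 1.0, nx)          # element centers along the beam
    height = np.interp(x_grid, x_ctrl, control_heights)  # smooth profile
    y_grid = (np.arange(ny) + 0.5) / ny         # normalized element heights
    # density[i, j] = 1 where element row j lies below the local profile
    density = (y_grid[None, :] < height[:, None]).astype(float)
    return density

# 5 design variables instead of nx * ny = 1200 binary pixels
rho = shape_function_density([1.0, 0.8, 0.6, 0.5, 0.4])
print(rho.shape)   # (60, 20)
```

The same idea extends to any compact encoding: the optimizer only ever sees the 5-vector, while the finite-element solver receives the full grid.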
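The Kruskal-Wallis step of the statistical analysis can be sketched with SciPy; the compliance samples below are synthetic placeholders, not the paper's data, and the post-hoc Dunn test is omitted for brevity:

```python
import numpy as np
from scipy.stats import kruskal

# Illustrative final-compliance samples (30 runs each) for three
# optimizers under one parameterization -- not the paper's results.
rng = np.random.default_rng(42)
de    = rng.normal(1.00, 0.05, 30)
cmaes = rng.normal(0.98, 0.05, 30)
hebo  = rng.normal(1.10, 0.05, 30)

# Kruskal-Wallis: non-parametric test of whether the three samples
# share a common distribution; a small p-value motivates post-hoc
# pairwise comparisons (Dunn's test in the paper).
stat, p = kruskal(de, cmaes, hebo)
print(p < 0.05)
```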

Results & Findings

| Parameterization | Best optimizer (10 D) | Best optimizer (20 D) | Best optimizer (50 D) |
| --- | --- | --- | --- |
| Pixel‑grid | DE (significantly better) | CMA‑ES (marginal) | HEBO (no clear winner) |
| Morphology‑aware | CMA‑ES (consistent) | CMA‑ES | CMA‑ES |
| Shape‑function | Any optimizer (statistically similar) | Any | Any |
  • Parameterization impact: The shape‑function representation (compact, smooth) consistently delivered the lowest compliance regardless of optimizer, even in 50‑D problems.
  • Optimizer impact: When using the pixel‑grid (high‑dim, noisy landscape), DE outperformed the others, but the performance gap shrank dramatically with the morphology‑aware and shape‑function encodings.
  • Convergence speed: Compact representations reached near‑optimal compliance using only ~30 % of the evaluation budget that dense pixel grids required.
  • Statistical significance: Across all dimensions, the effect size of parameterization exceeded that of optimizer choice (Cohen’s d ≈ 1.2 vs. ≈ 0.4).
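The effect-size comparison above uses Cohen's d, which standardizes the difference between two groups of run outcomes by their pooled spread. A minimal sketch (the sample data is illustrative, not the paper's):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d with pooled standard deviation: the standardized
    difference between two groups' mean outcomes."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1)
                  + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative final-compliance samples from two configurations
rng = np.random.default_rng(0)
good_param = rng.normal(1.0, 0.1, 30)   # e.g. shape-function runs
weak_param = rng.normal(1.3, 0.2, 30)   # e.g. pixel-grid runs
print(round(cohens_d(weak_param, good_param), 2))
```

By the usual convention, d ≈ 0.4 is a small-to-medium effect while d ≈ 1.2 is large, which is what makes the parameterization-dominates-optimizer claim quantitative.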

Practical Implications

  • Design‑first mindset: Engineers should invest time in crafting a good geometric encoding (e.g., using spline‑based shape functions or filtered density fields) before experimenting with sophisticated optimizers.
  • Algorithm selection simplified: With a strong parameterization, even simple, well‑understood optimizers like DE become competitive, reducing the need for expensive surrogate models.
  • Reduced computational budget: Compact representations cut the number of required finite‑element analyses, translating directly into cost savings for large‑scale TO projects (e.g., aerospace wing ribs, automotive chassis components).
  • Integration into CI/CD pipelines: Because the optimizer choice matters less, TO can be wrapped into automated design‑verification loops that run nightly on modest compute clusters.
  • Open‑source tooling: The authors’ codebase (Python + PyTorch for the surrogate) can be dropped into existing CAD‑FEA workflows, enabling rapid prototyping of new parameterizations.
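The "simple optimizers become competitive" point can be illustrated with SciPy's off-the-shelf DE on a compact 5-D design space. The objective below is a closed-form stand-in for a finite-element compliance evaluation (stiffness scaling as h³ plus a volume penalty), purely illustrative and not the paper's objective:

```python
import numpy as np
from scipy.optimize import differential_evolution

def toy_compliance(control_heights):
    """Toy stand-in for an FE compliance solve: penalize flexibility
    (thin sections) plus a material-volume cost. Illustrative only."""
    h = np.clip(control_heights, 0.05, 1.0)
    flexibility = np.sum(1.0 / h**3)   # beam stiffness scales with h^3
    material = 5.0 * np.sum(h)         # volume penalty
    return flexibility + material

# 5-dimensional compact design space; plain DE handles this easily
result = differential_evolution(toy_compliance,
                                bounds=[(0.05, 1.0)] * 5,
                                seed=1, maxiter=200, tol=1e-8)
print(np.round(result.x, 2))
```

Because the objective is separable, each component has the same closed-form optimum h* = (3/5)^(1/4) ≈ 0.88, which DE recovers within the budget; with a dense pixel-grid encoding the same algorithm would face hundreds of coupled binary variables instead.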

Limitations & Future Work

  • 2‑D benchmark only – Results may not fully extrapolate to 3‑D TO problems where memory and simulation costs explode.
  • Single objective & constraint – Only compliance minimization with a connectivity constraint was examined; multi‑objective or stress‑based formulations could behave differently.
  • Fixed simulation fidelity – The study used a single mesh resolution; adaptive meshing could interact with parameterization quality.
  • Future directions suggested by the authors include extending the analysis to 3‑D structures, exploring learned (e.g., VAE‑based) parameterizations, and testing additional black‑box algorithms such as neuroevolution or reinforcement‑learning‑guided search.

Authors

  • Jelle Westra
  • Iván Olarte Rodríguez
  • Niki van Stein
  • Thomas Bäck
  • Elena Raponi

Paper Information

  • arXiv ID: 2601.22241v1
  • Categories: cs.NE, cs.CE
  • Published: January 29, 2026