[Paper] Clustering-based Transfer Learning for Dynamic Multimodal Multi-Objective Evolutionary Algorithm
Source: arXiv - 2512.18947v1
Overview
The paper tackles a tough problem at the intersection of dynamic optimization, multimodal search, and multi‑objective evolution. In real‑world systems—think adaptive network routing, evolving game AI, or continuously changing manufacturing schedules—optimal solutions shift over time and often exist in many equivalent “flavors.” The authors introduce a new benchmark suite and a clustering‑driven autoencoder that together keep evolutionary algorithms both diverse and convergent as the environment changes.
Key Contributions
- Dynamic multimodal benchmark suite: 12 test functions that blend time‑varying dynamics with multiple Pareto‑optimal manifolds, giving researchers a realistic evaluation playground.
- Clustering‑based Autoencoder (CAE) predictor: Learns a compact representation of previously discovered solution clusters and generates a highly diverse initial population after each environmental change.
- Adaptive niching within a static optimizer: Dynamically adjusts niche sizes to balance convergence (getting close to the Pareto front) and diversity (covering all equivalent solution sets).
- Comprehensive empirical study: Shows the proposed algorithm outperforms leading dynamic multi‑objective and multimodal multi‑objective evolutionary algorithms on both decision‑space diversity and objective‑space convergence.
Methodology
- Problem Formalization – The authors define a dynamic multimodal multi‑objective optimization problem (DMMOP) where the objective functions and the shape/number of Pareto‑optimal sets evolve over discrete time steps.
- Benchmark Construction – Existing static multimodal and dynamic test suites are combined. Each benchmark varies parameters (e.g., landscape rotation, peak movement) to simulate realistic drift.
- Clustering‑based Autoencoder (CAE)
- Clustering: After each generation, the current population is grouped using a density‑based clustering algorithm (e.g., DBSCAN). Each cluster corresponds to a distinct modality (a “copy” of the Pareto set).
- Autoencoder Training: For every cluster, a shallow autoencoder is trained on the decision‑variable vectors. The encoder compresses the cluster’s geometry, while the decoder learns to reconstruct diverse samples from the latent space.
- Prediction & Re‑initialization: When the environment changes, the trained decoders generate a fresh, diverse set of candidate solutions that respect the learned cluster structures, seeding the next evolutionary run.
- Static Optimizer with Adaptive Niching – A conventional multi‑objective evolutionary algorithm (e.g., NSGA‑III) runs on the predicted population. An adaptive niching mechanism monitors crowding distance and dynamically expands or contracts niches to keep both convergence pressure and diversity.
- Evaluation Metrics – Standard indicators (IGD, HV) assess convergence, while decision‑space diversity is measured via clustering purity and spread metrics.
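The clustering-and-prediction pipeline above can be sketched in numpy. This is an illustrative reconstruction, not the authors' code: a minimal DBSCAN stands in for the density-based clustering step, and a linear autoencoder fit in closed form via SVD (the optimal linear case) stands in for the paper's shallow network. All function names and parameters (`eps`, `per_cluster`, the noise scale) are assumptions.

```python
import numpy as np

def dbscan(X, eps=0.5, min_pts=4):
    """Minimal density-based clustering (DBSCAN); returns one label per point, -1 = noise."""
    n = len(X)
    labels = np.full(n, -1)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    visited = np.zeros(n, dtype=bool)
    cluster = 0
    for i in range(n):
        if visited[i]:
            continue
        visited[i] = True
        neighbors = list(np.flatnonzero(dist[i] <= eps))
        if len(neighbors) < min_pts:
            continue                      # not a core point; may later join a cluster as a border point
        labels[i] = cluster
        queue = neighbors
        while queue:                      # breadth-first expansion of the density-connected region
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if not visited[j]:
                visited[j] = True
                nb = np.flatnonzero(dist[j] <= eps)
                if len(nb) >= min_pts:
                    queue.extend(nb)
        cluster += 1
    return labels

class LinearAE:
    """Linear autoencoder fit in closed form via SVD (stand-in for the paper's shallow network)."""
    def fit(self, X, k=2):
        self.mean = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.W = vt[:k]                             # encoder/decoder weights (principal directions)
        self.Z = (X - self.mean) @ self.W.T         # latent codes of the cluster members
        return self

    def sample(self, m, noise=0.05, rng=None):
        rng = np.random.default_rng(rng)
        z = self.Z[rng.integers(len(self.Z), size=m)]
        z = z + noise * rng.standard_normal(z.shape)  # perturb in latent space for diversity
        return self.mean + z @ self.W                 # decode back to decision space

def reinitialize(pop, per_cluster=20, eps=0.5):
    """After an environment change: cluster the old population, train one autoencoder
    per cluster (modality), and sample a diverse seed population per cluster."""
    labels = dbscan(pop, eps=eps)
    seeds = []
    for c in np.unique(labels[labels >= 0]):
        ae = LinearAE().fit(pop[labels == c])
        seeds.append(ae.sample(per_cluster, rng=0))
    return np.vstack(seeds) if seeds else pop
```

Sampling per cluster rather than globally is what preserves every discovered modality: each decoder only ever reproduces the geometry of its own Pareto-set copy.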
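The adaptive niching step can likewise be sketched. The crowding-distance computation below is the standard NSGA-II formulation; the radius-update rule is a hypothetical illustration of "monitoring crowding distance to expand or contract niches" (the paper's actual update rule and thresholds are not reproduced here).

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II crowding distance for an objective matrix F (n points x m objectives)."""
    n, m = F.shape
    d = np.zeros(n)
    for j in range(m):
        order = np.argsort(F[:, j])
        d[order[0]] = d[order[-1]] = np.inf      # boundary points are always kept
        span = F[order[-1], j] - F[order[0], j]
        if span == 0:
            continue
        # each interior point accumulates the normalized gap between its neighbors
        d[order[1:-1]] += (F[order[2:], j] - F[order[:-2], j]) / span
    return d

def adapt_niche_radius(radius, F, target=0.5, rate=0.1):
    """Hypothetical update rule: widen niches when mean finite crowding distance is
    large (population too spread out), shrink them when it is small (population
    collapsing onto one modality)."""
    cd = crowding_distance(F)
    finite = cd[np.isfinite(cd)]
    mean_cd = finite.mean() if len(finite) else target
    return radius * (1 + rate) if mean_cd > target else radius * (1 - rate)
```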
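Of the two convergence indicators mentioned, IGD is simple enough to show inline (hypervolume requires a more involved algorithm). This is the standard definition, not anything specific to the paper:

```python
import numpy as np

def igd(reference_front, obtained_set):
    """Inverted Generational Distance: mean Euclidean distance from each point of a
    reference Pareto front to its nearest obtained solution. Lower is better; it is
    driven down only by covering the *whole* front, so it rewards diversity too."""
    d = np.linalg.norm(reference_front[:, None, :] - obtained_set[None, :, :], axis=2)
    return d.min(axis=1).mean()
```

In the dynamic setting the reference front changes with the environment, so IGD is typically averaged over time steps (often reported as MIGD).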
Results & Findings
- Decision‑Space Diversity: The CAE‑driven approach maintains a higher number of distinct clusters throughout runs, reducing premature convergence to a single modality.
- Objective‑Space Convergence: IGD and HV scores improve by 15‑30 % over the best competing dynamic algorithms across all 12 benchmarks.
- Adaptation Speed: After a change, the autoencoder can regenerate a viable population within 2–3 generations, whereas baseline methods need 5–8 generations to recover diversity.
- Scalability: Experiments with up to 30 decision variables show modest computational overhead (autoencoder training adds ~10% to runtime) while delivering consistent performance gains.
Practical Implications
- Adaptive Systems: Engineers building self‑optimizing services (e.g., cloud resource allocation, autonomous vehicle path planning) can embed the CAE predictor to quickly re‑populate solution pools after a shift in workload or environment.
- Game Development & Procedural Content: Dynamic level‑design algorithms can keep multiple viable design “styles” alive, enabling richer, non‑repetitive content generation as player behavior evolves.
- Manufacturing & Supply‑Chain: When demand patterns or machine availability change, the method can instantly propose diverse production schedules that respect multiple optimal trade‑offs (cost vs. lead time).
- Tooling: The benchmark suite itself offers a ready‑made testbed for anyone developing new dynamic optimization libraries, helping to avoid over‑fitting to static or single‑modality scenarios.
Limitations & Future Work
- Model Complexity: The autoencoder is shallow and may struggle with highly non‑linear manifolds in very high‑dimensional spaces; deeper architectures could be explored.
- Clustering Sensitivity: Performance depends on the choice of clustering algorithm and its hyper‑parameters; automatic tuning mechanisms are needed for fully autonomous deployment.
- Real‑World Validation: Experiments are confined to synthetic benchmarks; applying the approach to a live system (e.g., network traffic routing) would test robustness under noisy, partial‑information conditions.
- Hybrid Transfer Learning: Future research could combine the CAE predictor with other transfer‑learning techniques (e.g., meta‑learning) to further reduce adaptation latency.
Authors
- Li Yan
- Bolun Liu
- Chao Li
- Jing Liang
- Kunjie Yu
- Caitong Yue
- Xuzhao Chai
- Boyang Qu
Paper Information
- arXiv ID: 2512.18947v1
- Categories: cs.AI, cs.NE
- Published: December 22, 2025