[Paper] PHWSOA: A Pareto-based Hybrid Whale-Seagull Scheduling for Multi-Objective Tasks in Cloud Computing

Published: December 10, 2025 at 07:01 AM EST
Source: arXiv

Overview

The paper introduces PHWSOA, a new hybrid meta‑heuristic that blends Whale Optimization (WOA) and Seagull Optimization (SOA) with Pareto‑based multi‑objective handling. By tackling makespan, VM load‑balancing, and cost together, the authors aim to give cloud operators a more balanced, real‑world scheduler than classic single‑metric approaches.

Key Contributions

  • Hybrid algorithm design – merges WOA’s global search with SOA’s local exploitation, overcoming each method’s individual weaknesses.
  • Pareto‑driven multi‑objective framework – simultaneously optimizes three conflicting goals (makespan, load balance, economic cost) using dominance ranking.
  • Halton sequence initialization – seeds the population with low‑discrepancy samples for better diversity and faster convergence.
  • Pareto‑guided mutation – injects diversity based on non‑dominated solutions to avoid premature stagnation.
  • Parallel execution & dynamic VM load redistribution – speeds up the search and continuously re‑balances workloads during runtime.
  • Extensive CloudSim evaluation – uses NASA‑iPSC and HPC2N real traces, showing up to 72 % makespan reduction, 37 % better load balance, and 24 % cost savings over state‑of‑the‑art baselines (WOA, GA, PEWOA, GCWOA).
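The dominance ranking behind the Pareto-driven framework can be illustrated with a minimal sketch (illustrative Python, not the authors' implementation; objective vectors are assumed to be minimization triples of makespan, load imbalance, and cost):

```python
def dominates(a, b):
    """a Pareto-dominates b when a is no worse in every objective and
    strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in population if not any(dominates(q, p) for q in population)]
```

A candidate survives into the front only if no other candidate beats it on all three objectives at once, which is exactly why conflicting goals like makespan and cost can both keep representatives in the front.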

Methodology

  1. Population seeding – Instead of random starts, the algorithm draws initial candidate schedules from a Halton sequence, which spreads points uniformly across the search space.
  2. Hybrid search operators
    • Whale phase: mimics bubble‑net feeding to explore broadly across VM‑task assignments.
    • Seagull phase: uses a “flocking” update rule to fine‑tune promising schedules locally.
  3. Pareto ranking – Each candidate is evaluated on the three objectives; non‑dominated solutions form the current Pareto front.
  4. Mutation guided by Pareto front – Solutions near the front receive targeted perturbations, encouraging exploration of under‑examined regions.
  5. Parallel evaluation – Fitness calculations for all candidates run concurrently (leveraging multi‑core CPUs), cutting wall‑clock time.
  6. Dynamic load redistribution – During simulation, if a VM becomes overloaded, tasks are migrated to under‑utilized VMs based on the current Pareto front, keeping balance in check.
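Step 1 (Halton seeding) can be sketched as follows. This is a hypothetical illustration: the schedule encoding (one VM index per task, one prime base per task dimension) is an assumption, since the paper's exact representation is not reproduced here.

```python
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def halton(index, base):
    """index-th element (1-based) of the radical-inverse sequence in the given base."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def seed_population(pop_size, n_tasks, n_vms):
    """Seed pop_size candidate schedules; schedule[t] is the VM assigned to task t.
    Dimension t of each Halton point uses the t-th prime as its base, so the points
    spread uniformly across the task-to-VM assignment space."""
    assert n_tasks <= len(PRIMES), "extend PRIMES for higher-dimensional problems"
    return [[int(halton(i + 1, PRIMES[t]) * n_vms) for t in range(n_tasks)]
            for i in range(pop_size)]
```

Unlike uniform random seeding, consecutive Halton points deliberately avoid each other, which is what gives the low-discrepancy coverage the paper credits for faster convergence.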
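Step 6's redistribution can likewise be sketched as a greedy migration step. This simplified version only narrows the max-min load gap, whereas the paper guides migration by the current Pareto front; treat it as a sketch of the mechanism, not the authors' rule.

```python
def rebalance_once(schedule, task_len, n_vms):
    """One greedy migration: move the smallest task off the busiest VM onto the
    idlest VM, but only when that strictly narrows the max-min load gap.
    Returns True if a task was migrated."""
    load = [0.0] * n_vms
    for t, vm in enumerate(schedule):
        load[vm] += task_len[t]
    hi = max(range(n_vms), key=lambda v: load[v])
    lo = min(range(n_vms), key=lambda v: load[v])
    movable = [t for t, vm in enumerate(schedule) if vm == hi]
    if not movable:
        return False
    t = min(movable, key=lambda t: task_len[t])
    # moving a task of length L changes the gap g to |g - 2L|,
    # which shrinks only when 0 < L < g
    if 0 < task_len[t] < load[hi] - load[lo]:
        schedule[t] = lo
        return True
    return False
```

Calling this in a loop until it returns False drives the VM loads toward the low utilization variance reported in the results.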

Results & Findings

  • Makespan: PHWSOA cuts total execution time by up to 72.1 % compared with classic WOA and GA.
  • Load balancing: The variance of VM utilization drops by 36.8 %, meaning tasks are spread more evenly across the cloud pool.
  • Cost: By reducing both execution time and over‑provisioned resources, the algorithm saves 23.5 % in monetary cost.
  • Convergence speed: Parallel processing and the Halton start reduce the number of generations needed to reach near‑optimal fronts by roughly 30 %.
  • Robustness: Across two distinct real‑world workloads (NASA‑iPSC, HPC2N), PHWSOA consistently outperforms the baselines, indicating good generalization.

Practical Implications

  • Cloud orchestration platforms (e.g., OpenStack, Kubernetes) can embed PHWSOA as a plug‑in scheduler to automatically balance latency, resource usage, and billing.
  • DevOps pipelines that spin up temporary VMs for batch jobs (CI builds, data‑processing) can achieve faster turnaround and lower cloud spend without manual tuning.
  • Edge‑cloud hybrid deployments benefit from the dynamic load‑redistribution component, allowing workloads to shift between edge nodes and central clouds based on real‑time load.
  • SLA‑aware services can use the Pareto front to pick a schedule that meets a specific trade‑off (e.g., prioritize cost over latency during off‑peak hours).
  • The algorithm’s parallel nature makes it suitable for integration into existing autoscaling controllers that already run on multi‑core management nodes.
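Picking one schedule from the Pareto front under an SLA-specific trade-off might look like the sketch below. The weighted-sum rule and the min-max normalization are assumptions for illustration, not the paper's selection method.

```python
def pick_schedule(front, weights):
    """Choose one Pareto point by a weighted sum of min-max normalized objectives.
    front: list of (label, objectives) pairs, all objectives minimized;
    weights: one preference weight per objective (e.g. makespan, balance, cost)."""
    k = len(weights)
    lo = [min(obj[i] for _, obj in front) for i in range(k)]
    hi = [max(obj[i] for _, obj in front) for i in range(k)]

    def score(obj):
        # normalize each objective to [0, 1] before weighting, so that
        # objectives on different scales (seconds vs. dollars) are comparable
        return sum(w * (v - l) / (h - l) if h > l else 0.0
                   for w, v, l, h in zip(weights, obj, lo, hi))

    return min(front, key=lambda pair: score(pair[1]))
```

Shifting the weights is how an operator would express "prioritize cost over latency during off-peak hours" without re-running the optimizer: the front is computed once, and the selection rule changes per SLA window.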

Limitations & Future Work

  • Scalability to massive clusters – Experiments capped at a few hundred VMs; performance on thousands of nodes remains untested.
  • Static workload assumption – While the method includes a dynamic redistribution step, the primary optimization runs on a fixed batch of tasks; continuous streaming workloads need further adaptation.
  • Parameter sensitivity – The hybrid algorithm introduces several hyper‑parameters (e.g., mutation rate, balance between whale and seagull phases) that may require domain‑specific tuning.
  • Energy considerations – The current cost model focuses on monetary expense; future extensions could incorporate power consumption for greener cloud operations.

Overall, PHWSOA offers a compelling blend of theoretical rigor and practical gains, positioning it as a strong candidate for next‑generation cloud schedulers.

Authors

  • Zhi Zhao
  • Hang Xiao
  • Wei Rang

Paper Information

  • arXiv ID: 2512.09568v1
  • Categories: cs.DC
  • Published: December 10, 2025