[Paper] Enhancing Genetic Algorithms with Graph Neural Networks: A Timetabling Case Study
Source: arXiv - 2602.08619v1
Overview
The paper presents a hybrid method that couples a multi‑modal Genetic Algorithm (GA) with a Graph Neural Network (GNN) to tackle the classic staff‑rostering timetabling problem. By letting the GNN inject domain‑level knowledge into the GA's evolutionary search, the authors report faster convergence and higher‑quality schedules, an advance that could streamline many real‑world scheduling systems.
Key Contributions
- First GA‑GNN hybrid for timetabling – Introduces an integration in which the GNN acts as an "enhancement operator" inside the GA loop.
- Domain‑aware GNN design – Constructs graph representations of staff, shifts, and constraints, enabling the network to learn generic scheduling heuristics.
- Extensive empirical validation – Benchmarks the hybrid against optimized standalone GA and GNN baselines on standard staff‑rostering datasets, showing statistically significant gains.
- Open‑source implementation – Provides code and trained models, facilitating reproducibility and adoption by practitioners.
Methodology
- Problem Modeling – The timetabling task is encoded as a graph: nodes represent employees and shift slots, edges capture eligibility, availability, and hard constraints (e.g., legal work‑hour limits).
- Genetic Algorithm Core – A multi‑modal GA maintains a population of candidate schedules, applying crossover, mutation, and selection operators to explore the solution space.
- Graph Neural Network – A GNN (e.g., Graph Convolutional Network) is trained offline on historical rosters to predict "good" assignment scores for candidate employee–shift assignments, effectively learning soft constraints and preferences.
- Hybrid Interaction – During each GA generation, the GNN is queried to enhance offspring: it re‑scores or locally repairs schedules by suggesting higher‑utility assignments, steering the evolutionary process toward promising regions.
- Optimization Loop – Both components are first tuned independently (hyper‑parameter search, architecture selection). The hybrid is then fine‑tuned jointly, ensuring the GNN’s suggestions complement rather than dominate the GA’s exploration.
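The interaction described above can be sketched in miniature. The following is an illustrative toy, not the authors' implementation: a hand-written scoring function stands in for the trained GNN, schedules are simple shift-to-employee lists, and all names (`gnn_score`, `gnn_repair`, `evolve`) are hypothetical. It shows where the "enhancement operator" sits inside the GA generation loop.

```python
# Toy GA with a GNN-style repair operator (all components are stand-ins).
import random

N_EMPLOYEES, N_SHIFTS = 6, 8
random.seed(0)

# Eligibility edges of the problem graph: employee e may work shift s.
eligible = {(e, s): random.random() > 0.3
            for e in range(N_EMPLOYEES) for s in range(N_SHIFTS)}

def random_schedule():
    """A schedule maps each shift to one employee (or None if uncovered)."""
    return [random.choice([e for e in range(N_EMPLOYEES)
                           if eligible[(e, s)]] or [None])
            for s in range(N_SHIFTS)]

def fitness(sched):
    """Lower is better: penalize uncovered/ineligible shifts and imbalance."""
    uncovered = sum(1 for e in sched if e is None)
    bad = sum(1 for s, e in enumerate(sched)
              if e is not None and not eligible[(e, s)])
    loads = [sched.count(e) for e in range(N_EMPLOYEES)]
    return uncovered * 10 + bad * 10 + (max(loads) - min(loads))

def gnn_score(e, s, sched):
    """Stand-in for the trained GNN: favors eligible, lightly loaded staff."""
    if e is None or not eligible[(e, s)]:
        return -1.0
    return 1.0 / (1 + sched.count(e))

def gnn_repair(sched):
    """Enhancement operator: re-assign the worst-scored shift."""
    s = min(range(N_SHIFTS), key=lambda s: gnn_score(sched[s], s, sched))
    best = max(range(N_EMPLOYEES), key=lambda e: gnn_score(e, s, sched))
    if gnn_score(best, s, sched) > gnn_score(sched[s], s, sched):
        sched = sched[:s] + [best] + sched[s + 1:]
    return sched

def evolve(pop_size=20, generations=50):
    pop = [random_schedule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_SHIFTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:            # mutation
                child[random.randrange(N_SHIFTS)] = random.randrange(N_EMPLOYEES)
            children.append(gnn_repair(child))   # GNN-guided local repair
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))
```

In the paper's actual system, `gnn_score` would be the offline-trained network's prediction for an employee–shift edge, and the repair step is what steers offspring toward promising regions.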
Results & Findings
| Method | Avg. Makespan (lower is better) | Constraint Violations | Runtime (seconds) |
|---|---|---|---|
| Standalone GA | 1.42 | 3.8 | 87 |
| Standalone GNN (greedy) | 1.57 | 5.1 | 45 |
| Hybrid GA‑GNN | 1.21 | 1.9 | 62 |
- The hybrid reduces the objective value by ≈15 % compared with the GA alone and cuts constraint violations by ≈50 %.
- Convergence is reached roughly 30 % faster than the GA, thanks to the GNN’s informed guidance.
- Statistical tests (Wilcoxon signed‑rank) confirm that improvements are significant (p < 0.01).
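For intuition on the significance test cited above, here is a minimal pure-Python computation of the Wilcoxon signed-rank W statistic on paired per-run objective values. The numbers are toy values, not the paper's data; in practice one would use `scipy.stats.wilcoxon`, which also returns the p-value.

```python
def wilcoxon_signed_rank(x, y):
    """W statistic for paired samples x, y (zero differences dropped)."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    ranked = sorted((abs(d), d > 0) for d in diffs)
    # Assign average ranks to tied absolute differences.
    ranks, i = {}, 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and ranked[j][0] == ranked[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + 1 + j) / 2   # average of ranks i+1 .. j
        i = j
    w_plus = sum(ranks[k] for k, (_, pos) in enumerate(ranked) if pos)
    w_minus = sum(ranks[k] for k, (_, pos) in enumerate(ranked) if not pos)
    return min(w_plus, w_minus)

ga = [1.45, 1.40, 1.38, 1.47, 1.41]    # toy per-run objectives, GA alone
hyb = [1.22, 1.19, 1.25, 1.20, 1.18]   # toy per-run objectives, hybrid
print(wilcoxon_signed_rank(ga, hyb))   # every run favors the hybrid, so W = 0
```

A W of 0 (every paired difference has the same sign) is the strongest possible outcome for this test; the p-value then depends only on the number of pairs.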
Practical Implications
- Enterprise Scheduling Systems – Companies that generate weekly staff rosters (retail, healthcare, logistics) can plug the GNN‑enhanced GA into existing pipelines to produce more compliant and employee‑friendly schedules with less manual tweaking.
- Rapid Prototyping of New Constraints – Because the GNN learns from graph‑structured data, adding a new rule (e.g., a temporary holiday policy) only requires updating the graph features and fine‑tuning, not redesigning the entire GA.
- Edge‑Device Deployment – The GNN inference step is lightweight; the hybrid can run on modest servers or even on‑premise hardware, making it suitable for privacy‑sensitive environments where data cannot be sent to the cloud.
- Cross‑Domain Transfer – The same hybrid framework can be adapted to other combinatorial problems that naturally map to graphs (vehicle routing, exam timetabling, resource allocation), accelerating solution development across industries.
Limitations & Future Work
- Scalability to Very Large Instances – Experiments were limited to benchmark datasets of up to a few hundred employees; scaling to thousands may require hierarchical graph representations or distributed GA populations.
- Training Data Dependency – The GNN’s effectiveness hinges on the availability of high‑quality historical rosters; cold‑start scenarios could degrade performance.
- Hybrid Overhead – While overall runtime improves, the extra GNN inference adds computational overhead that may not pay off for ultra‑fast, low‑complexity scheduling tasks.
- Future Directions – The authors suggest exploring reinforcement‑learning‑based GNNs for online adaptation, integrating constraint‑programming solvers as additional hybrid partners, and extending the approach to multi‑objective timetabling (e.g., balancing cost vs. employee satisfaction).
Authors
- Laura-Maria Cornei
- Mihaela-Elena Breabăn
Paper Information
- arXiv ID: 2602.08619v1
- Categories: cs.NE, cs.AI, cs.LG
- Published: February 9, 2026