[Paper] Unifying approach to uniform expressivity of graph neural networks

Published: February 20, 2026 at 01:18 PM EST
5 min read

Source: arXiv - 2602.18409v1

Overview

The paper introduces Template Graph Neural Networks (T‑GNNs), a unifying framework that captures a wide family of GNN architectures—including many recent “substructure‑aware” variants—by letting nodes aggregate information from graph templates (e.g., cycles, motifs, or any predefined subgraph pattern). The authors also define a matching logical language, Graded Template Modal Logic (GML(T)), and prove that T‑GNNs are exactly as expressive as this logic, giving a clean, theory‑backed way to compare and reason about the power of different GNN designs.

Key Contributions

  • Template GNN (T‑GNN) framework: Formalizes GNNs that aggregate over arbitrary, user‑specified graph templates rather than just immediate neighbours or global read‑outs.
  • Graded Template Modal Logic (GML(T)): A new logical system that mirrors the computation of T‑GNNs, extending classic Weisfeiler‑Leman (WL) and first‑order logic characterizations.
  • Equivalence theorem: Shows that the expressive power of T‑GNNs and GML(T) coincide, establishing a precise correspondence between architecture and logic.
  • Unified expressivity analysis: Demonstrates how existing models—standard AC‑GNNs, higher‑order GNNs, subgraph‑counting GNNs, etc.—fit as special cases of T‑GNNs, providing a single lens to compare them.
  • Template‑based bisimulation & WL extension: Introduces generalized notions of bisimulation and a template‑aware WL algorithm that serve as the theoretical backbone for the equivalence proof.

Methodology

  1. Template definition – A template is any small graph (e.g., a triangle, a 4‑cycle, a star) equipped with its own node/edge attributes. The set of allowed templates, 𝒯, is fixed before training.
  2. Embedding generation – For each node v in the input graph G, the model enumerates all embeddings of each template t ∈ 𝒯 that map t’s distinguished node to v.
  3. Aggregation – Node features are updated by aggregating (e.g., sum, mean, max) the embeddings of all valid template matches, optionally combined with traditional neighbourhood aggregation.
  4. Logical correspondence – The authors construct GML(T), a modal logic where modalities are indexed by templates and graded quantifiers count how many template embeddings satisfy a property.
  5. Expressivity proof – Using a template‑aware bisimulation relation and a generalized WL refinement process, they prove that any function computable by a T‑GNN can be expressed in GML(T) and vice versa.
  6. Instantiations – They map several known GNN variants (e.g., Subgraph GNNs, Cycle‑Counting GNNs, k‑WL‑based GNNs) onto specific choices of 𝒯, showing the framework’s flexibility.
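The enumerate‑then‑aggregate loop of steps 2–3 can be sketched in plain Python. This is a toy brute‑force matcher; the helper names (`template_matches_at`, `tgnn_layer`) and the exact update rule are illustrative, not the paper's formal definitions:

```python
from itertools import permutations

def template_matches_at(adj, template_adj, root):
    """Count embeddings of the template into the graph that send the
    template's distinguished node 0 to `root`. `adj` maps node -> set of
    neighbours; `template_adj` maps index -> set of indices. Brute force,
    so only sensible for toy graphs and small templates."""
    k = len(template_adj)
    count = 0
    for image in permutations(adj, k):
        if image[0] != root:
            continue
        # every template edge must map to a graph edge
        if all(image[i] in adj[image[j]]
               for j in template_adj for i in template_adj[j]):
            count += 1
    return count

def tgnn_layer(adj, feats, templates, combine):
    """One T-GNN-style update: each node combines its own feature, summed
    neighbour features, and a vector of template-match counts rooted at it."""
    return {
        v: combine(feats[v],
                   sum(feats[u] for u in adj[v]),
                   [template_matches_at(adj, t, v) for t in templates])
        for v in adj
    }
```

With the 3‑cycle as the only template and summation as `combine`, each node of a triangle ends up with its old feature, its two neighbours' features, and its two rooted triangle embeddings.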

Results & Findings

  • Theoretical equivalence: The main theorem establishes a tight correspondence: T‑GNNs ⇔ GML(T) ⇔ template‑aware WL. This means any graph property distinguishable by the logic (or the WL variant) can be learned by a suitably designed T‑GNN.
  • Expressivity hierarchy: By varying the template set 𝒯, the authors recover known expressivity hierarchies (e.g., 1‑WL < 2‑WL < …) and demonstrate that adding richer templates strictly increases discriminative power.
  • Concrete examples: They illustrate that counting 4‑cycles—a task beyond standard 1‑WL—becomes expressible as soon as a 4‑cycle template is added to 𝒯.
  • Unification: All examined GNN architectures are shown to be special cases, confirming that the template view subsumes prior “enhanced‑aggregation” tricks.
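A concrete way to see the expressivity gap: the classic pair of one 6‑cycle versus two disjoint triangles is indistinguishable by 1‑WL (both graphs are 2‑regular), yet a triangle template separates them immediately. The paper's example uses 4‑cycles; the mechanism is the same. A minimal self‑check, with both graphs as adjacency dicts:

```python
from itertools import combinations

def wl_colours(adj, rounds=4):
    """Plain 1-WL colour refinement; returns the final multiset of colours."""
    colours = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colours = {v: relabel[sig[v]] for v in adj}
    return sorted(colours.values())

def triangle_count(adj):
    """Matches of the 3-cycle template (unrooted, unordered)."""
    return sum(1 for a, b, c in combinations(adj, 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

c6 = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}   # one 6-cycle
two_c3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1},               # two triangles
          3: {4, 5}, 4: {3, 5}, 5: {3, 4}}
```

1‑WL assigns both graphs the identical colour multiset (all six nodes stay in a single class), while the triangle counts are 0 and 2 respectively.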

Practical Implications

  • Modular design: Developers can now think of GNN architecture as a plug‑and‑play system: choose a library of templates that capture domain‑specific motifs (e.g., functional groups in chemistry, traffic patterns in road networks) and let the T‑GNN handle aggregation automatically.
  • Targeted expressivity: Instead of blindly scaling model depth or hidden size, practitioners can boost discriminative power by adding a few well‑chosen templates, often with lower computational overhead than higher‑order WL‑based GNNs.
  • Explainability: Since each template corresponds to an interpretable substructure, the learned weights on template aggregations can be inspected to understand why a model distinguishes two graphs.
  • Compatibility: Existing GNN libraries (PyTorch Geometric, DGL) can implement T‑GNNs by extending their message‑passing API to accept a list of subgraph matchers, making adoption straightforward.
  • Domain‑specific acceleration: In chemistry, adding templates for common rings (benzene, pyridine) can enable models to capture aromaticity without deep message‑passing, potentially reducing training time and improving generalization on small datasets.
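One low‑friction way to approximate this modularity today, without touching a library's message‑passing internals, is to precompute rooted template counts as extra node features and concatenate them onto the input. A sketch under that assumption (the function names are mine, not any library's API):

```python
from itertools import combinations

def triangles_at(adj, v):
    """Rooted 3-cycle count: number of triangles passing through node v."""
    return sum(1 for a, b in combinations(adj[v], 2) if b in adj[a])

def attach_template_features(adj, feats, counters):
    """Append one precomputed count per template counter to each node's
    feature list; the result can feed any off-the-shelf GNN unchanged."""
    return {v: feats[v] + [c(adj, v) for c in counters] for v in adj}
```

In a chemistry setting, counters for 5‑ and 6‑membered rings would play the role of the benzene and pyridine templates mentioned above.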

Limitations & Future Work

  • Template enumeration cost: Exhaustively finding all embeddings of complex templates can be expensive for large graphs; the paper suggests approximate counting or sampling as mitigations but leaves efficient implementations as an open challenge.
  • Template selection: The framework does not prescribe how to choose an optimal template set for a given task; automated discovery or learning of useful templates remains unexplored.
  • Scalability to massive graphs: While the theory holds for any graph size, practical scalability to billions of edges (e.g., social networks) will require distributed subgraph matching techniques.
  • Empirical validation: The paper focuses on theoretical expressivity; extensive benchmark experiments comparing T‑GNNs to state‑of‑the‑art models on real‑world tasks are left for future work.
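The sampling mitigation mentioned for the enumeration cost can be sketched as a Monte‑Carlo estimator over ordered node tuples. This is a naive sketch, not the paper's proposal; practical estimators are far more refined, and the variance explodes when matches are rare:

```python
import random

def approx_template_count(adj, template_adj, samples=10_000, seed=0):
    """Monte-Carlo estimate of the number of ordered template embeddings:
    sample ordered tuples of distinct nodes uniformly, measure the hit
    rate, and scale by the total number of such tuples."""
    rng = random.Random(seed)
    nodes = list(adj)
    k = len(template_adj)
    hits = 0
    for _ in range(samples):
        image = rng.sample(nodes, k)  # ordered tuple of distinct nodes
        # hit if every template edge maps to a graph edge
        if all(image[i] in adj[image[j]]
               for j in template_adj for i in template_adj[j]):
            hits += 1
    total = 1
    for i in range(k):
        total *= len(nodes) - i
    return hits / samples * total
```

On a single triangle with a 3‑cycle template every sampled tuple is a hit, so the estimator recovers the exact count of 6 ordered embeddings.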

Bottom line: By framing GNN expressivity in terms of user‑defined graph templates and a matching logical language, the authors provide a powerful, extensible toolkit for developers who need more discriminative power without resorting to heavyweight higher‑order architectures. The next step is turning this elegant theory into scalable, plug‑and‑play libraries that let engineers inject domain knowledge directly into their graph models.

Authors

  • Huan Luo
  • Jonni Virtema

Paper Information

  • arXiv ID: 2602.18409v1
  • Categories: cs.LG, cs.AI, cs.LO
  • Published: February 20, 2026