[Paper] VS-Graph: Scalable and Efficient Graph Classification Using Hyperdimensional Computing
Source: arXiv - 2512.03394v1
Overview
The paper introduces VS‑Graph, a novel framework that brings the ultra‑lightweight world of Hyperdimensional Computing (HDC) to the demanding task of graph classification. By replacing heavyweight back‑propagation‑based Graph Neural Networks (GNNs) with a purely vector‑symbolic pipeline, the authors achieve competitive (and sometimes superior) accuracy while slashing training time by up to 450×, making graph learning feasible on edge devices and neuromorphic chips.
Key Contributions
- Spike Diffusion: a topology‑driven node‑identification scheme that encodes graph structure directly into high‑dimensional vectors.
- Associative Message Passing: a multi‑hop neighborhood aggregation performed entirely in the hypervector space, eliminating the need for gradient‑based updates.
- Performance Gains: matches or exceeds modern GNN baselines on benchmarks (e.g., MUTAG, DD) while being 4–5 percentage points more accurate than the prior HDC-based baseline.
- Training Speedup: up to 450× faster than conventional GNN training because there is no back‑propagation.
- Dimensionality Robustness: retains high accuracy even when hypervectors are compressed to D = 128, opening the door to ultra‑compact hardware implementations.
Methodology
- Hypervector Encoding – Each node is assigned a random high‑dimensional binary (or bipolar) vector. The graph’s adjacency information is captured by “spiking” these vectors through the Spike Diffusion process, which propagates a node’s identifier across its neighbors in a way that respects the graph topology.
- Associative Message Passing – Instead of learning weight matrices, VS‑Graph repeatedly combines (via binding and bundling operations typical of Vector Symbolic Architectures) the spiked vectors of a node’s multi‑hop neighborhood. This yields a single graph‑level hypervector that aggregates structural and feature information.
- Classification – The final hypervector is compared against class prototypes using a simple similarity measure (e.g., cosine or Hamming distance); the class with the highest similarity wins. No gradient descent, loss functions, or epochs are required (see the sketch after this list).
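To make the three steps concrete, here is a minimal end-to-end sketch in NumPy. It is an illustration under stated assumptions (bipolar hypervectors, adjacency multiplication as the diffusion step, majority-sign as the bundling operator), not the authors' implementation; the names `encode_graph`, `fit_prototypes`, and `predict` are ours.

```python
import numpy as np

def encode_graph(adj, d=10_000, hops=2, rng=None):
    """Encode one graph as a single hypervector.

    Minimal sketch of the pipeline above -- illustrative, not the authors'
    code. Assumes bipolar {-1, +1} hypervectors, a spike-diffusion-like step
    realized as adjacency multiplication, and majority-sign bundling.
    """
    if rng is None:
        rng = np.random.default_rng(0)  # fixed seed: encoding is deterministic
    n = adj.shape[0]
    # Random bipolar identity hypervector per node.
    h = rng.choice([-1.0, 1.0], size=(n, d))
    # "Spike diffusion" / associative message passing: propagate node
    # identifiers over the topology -- a matrix product plus sign
    # thresholding, with no learned weights anywhere.
    for _ in range(hops):
        h = np.sign(adj @ h + h)
        h[h == 0] = 1.0  # break ties deterministically
    # Bundle all node hypervectors into one graph-level hypervector.
    g = np.sign(h.sum(axis=0))
    g[g == 0] = 1.0
    return g

def fit_prototypes(graphs, labels, **kw):
    """'Training': bundle the encodings of each class into one prototype."""
    protos = {}
    for y in set(labels):
        vecs = [encode_graph(a, **kw) for a, lab in zip(graphs, labels) if lab == y]
        p = np.sign(np.sum(vecs, axis=0))
        p[p == 0] = 1.0
        protos[y] = p
    return protos

def predict(adj, protos, **kw):
    """Classify by highest cosine similarity to a class prototype."""
    g = encode_graph(adj, **kw)
    cos = lambda u, v: (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return max(protos, key=lambda y: cos(g, protos[y]))
```

Note that "training" here is a single encoding pass per graph plus a per-class sum: no loss, no gradients, no epochs, which is where the reported speedup comes from.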
The whole pipeline is deterministic and can be expressed as a series of matrix‑vector multiplications and element‑wise operations, which map naturally onto CPUs, GPUs, and especially low‑power neuromorphic accelerators.
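For readers new to Vector Symbolic Architectures, the toy snippet below illustrates the two element-wise primitives such a pipeline is built from, using bipolar vectors and common operator choices (the paper's exact operators may differ): binding via element-wise product and bundling via majority vote.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.choice([-1, 1], size=(2, 8192))  # two random bipolar hypervectors

bind = a * b                    # binding: element-wise product (XOR analogue)
bundle = np.sign(a + b + bind)  # bundling: majority vote (odd count, so no ties)

assert np.array_equal(bind * b, a)  # binding is self-inverse: unbinding recovers a
print(a @ bind / a.size)            # ~0.0 -- bound vector is quasi-orthogonal to a
print(a @ bundle / a.size)          # ~0.5 -- bundle stays similar to its inputs
```

With binary instead of bipolar vectors, binding becomes XOR and bundling a bitwise majority, which is why these pipelines map so naturally onto low-power and neuromorphic hardware.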
Results & Findings
| Dataset | VS‑Graph Accuracy | Best GNN Accuracy | Prior HDC Baseline |
|---|---|---|---|
| MUTAG | 92.1 % (±0.3) | 91.8 % | 87.0 % |
| DD | 78.4 % (±0.5) | 77.9 % | 73.2 % |
| PROTEINS | 73.2 % | 73.0 % | 68.5 % |
- Training time: VS‑Graph finishes in seconds where GNNs need minutes to hours (up to 450× speedup).
- Dimensionality test: Reducing hypervector size from D = 10 000 to D = 128 drops accuracy by < 1 % on most benchmarks, confirming robustness to aggressive compression.
- Memory footprint: The model stores only a handful of prototype vectors (one per class) plus the random seed for hypervector generation, resulting in < 1 MB memory usage for all tested datasets.
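As a rough sanity check on that figure (assuming bipolar prototypes stored at one byte per dimension, a storage choice the paper may or may not make): a binary task like MUTAG needs two prototypes, about 2 × 10,000 B ≈ 20 KB at the full D = 10,000 and well under 1 KB at D = 128, comfortably below the 1 MB bound.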
Practical Implications
- Edge & IoT deployments – Developers can now run graph‑based inference (e.g., molecular property checks, network anomaly detection) on microcontrollers or low‑power ASICs without the overhead of a full GNN stack.
- Rapid prototyping – Because there’s no training loop, data scientists can iterate on feature engineering or graph preprocessing in minutes rather than hours.
- Neuromorphic hardware compatibility – The binding/bundling operations map directly to spiking neural hardware, enabling future ultra‑energy‑efficient graph analytics.
- Cost‑effective scaling – Large‑scale graph datasets (e.g., social‑network subgraph classification) become tractable on commodity cloud instances, reducing compute bills.
Limitations & Future Work
- Expressiveness ceiling – While VS‑Graph closes the gap with GNNs, it still lags on highly heterophilic graphs where edge attributes play a dominant role.
- Static graphs only – The current formulation assumes a fixed graph structure; extending the method to dynamic or streaming graphs is left for future research.
- Limited interpretability – Hypervector similarity scores are less intuitive than attention weights in GNNs, making debugging harder.
- Hardware validation – The authors demonstrate software‑level speedups; real‑world benchmarks on actual edge or neuromorphic chips are a natural next step.
VS‑Graph shows that brain‑inspired hyperdimensional computing can finally compete with heavyweight deep learning on graph tasks, opening a practical path for developers who need fast, lightweight, and scalable graph classification.
Authors
- Hamed Poursiami
- Shay Snyder
- Guojing Cong
- Thomas Potok
- Maryam Parsa
Paper Information
- arXiv ID: 2512.03394v1
- Categories: cs.LG, cs.AI, cs.NE
- Published: December 3, 2025