[Paper] Recursive querying of neural networks via weighted structures

Published: January 6, 2026 at 12:30 PM EST
4 min read
Source: arXiv - 2601.03201v1

Overview

The paper Recursive querying of neural networks via weighted structures explores how to query neural‑network models using declarative, logic‑based languages. By treating a trained network as a weighted structure—a graph whose nodes and edges carry numeric weights—the authors develop a recursive query framework that can express complex, model‑agnostic analyses while keeping the computational cost under control.
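
To make the data model concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of a tiny feed-forward network encoded as a weighted structure: a node set plus an edge relation whose pairs carry real-valued weights. The node names, weights, and the `in_weight` helper are all made up for illustration.

```python
# Hypothetical encoding of a tiny feed-forward network as a weighted
# structure: nodes form the domain, and the edge relation maps
# (source, target) pairs to real-valued weights, the kind of numeric
# annotation the paper's logic queries over.
nodes = {"x1", "x2", "h1", "h2", "y"}

edge_weight = {
    ("x1", "h1"): 0.8, ("x1", "h2"): -0.3,
    ("x2", "h1"): 0.5, ("x2", "h2"): 1.2,
    ("h1", "y"): 1.0,  ("h2", "y"): -0.7,
}

# A model-agnostic "query" is then any function of this structure alone,
# e.g. the total absolute weight feeding into a node:
def in_weight(node):
    return sum(abs(w) for (u, v), w in edge_weight.items() if v == node)

print(in_weight("y"))  # 1.7
```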

Key Contributions

  • Functional fixpoint logic for weighted structures – adapts the classic Grädel‑Gurevich fixpoint mechanism to a Datalog‑style syntax that works on graphs with numeric weights.
  • Loose fixpoint semantics – introduces a variant in which inductively defined weight functions may be overwritten, simplifying reasoning about recursive definitions (a toy operational sketch follows this list).
  • Scalar restriction – defines a fragment of the logic with polynomial‑time data complexity, suitable for practical query evaluation.
  • Expressiveness results – shows the scalar fragment can capture all PTIME model‑agnostic queries on reduced neural networks whose weights are polynomially bounded.
  • Complexity lower bound – proves that even very simple model‑agnostic queries become NP‑complete when the scalar restriction is lifted.
  • Iterated transductions – studies how weighted structures can be transformed repeatedly (e.g., layer‑wise abstraction) while preserving queryability.
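
The following Python sketch is my own operational reading of the loose fixpoint idea, not the paper's formal semantics: the function is recomputed from its previous stage and overwritten wholesale each round, with no monotonicity requirement. The `loose_fixpoint` helper and the clipping rule are purely illustrative.

```python
# Sketch of loose-fixpoint iteration: the mapping `f` is recomputed from
# its previous stage and overwritten each round, stopping once it
# stabilizes. No monotone growth of values is required.
def loose_fixpoint(initial, step, max_rounds=1000):
    f = dict(initial)
    for _ in range(max_rounds):
        new_f = step(f)          # overwrite: no monotonicity requirement
        if new_f == f:           # reached a fixpoint
            return new_f
        f = new_f
    raise RuntimeError("no fixpoint within round bound")

# Illustrative rule: repeatedly clip values above 1.0 down to 1.0.
clip = lambda f: {k: min(v, 1.0) for k, v in f.items()}
print(loose_fixpoint({"a": 3.0, "b": 0.5}, clip))  # {'a': 1.0, 'b': 0.5}
```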

Methodology

  1. Weighted structures as a data model – A feed‑forward neural network is represented as a directed acyclic graph where each edge carries a real‑valued weight and each node may store activation values.
  2. Logic design – The authors start from functional fixpoint logic (FFL), which lets you define new functions recursively until a fixpoint is reached. They translate FFL into a Datalog‑like rule language, making it more familiar to database and programming language practitioners.
  3. Loose fixpoint variant – Instead of requiring monotone growth of function values, the “loose” version permits overwriting, which mirrors how back‑propagation updates weights. This simplifies the semantics and enables more natural recursive queries (e.g., “propagate a threshold through the network”).
  4. Scalar fragment – By restricting function arguments to scalar (single‑value) terms and bounding recursion depth polynomially, the authors obtain a fragment whose evaluation runs in polynomial time with respect to the network size (a toy evaluation of such a query follows this list).
  5. Complexity analysis – Using reductions from classic PTIME and NP‑complete problems, they map the expressive power of the logic to known complexity classes.
  6. Transduction pipelines – They model common preprocessing steps (pruning, quantization, layer merging) as transductions—logic‑based transformations that output a new weighted structure—allowing queries to be composed with these pipelines.
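
As a rough rendering of how a scalar recursive query might be evaluated, the sketch below (an illustration under my own assumptions, not the paper's evaluator) computes the maximum cumulative path weight reaching each node of a small DAG by naive fixpoint iteration, then ends with a single scalar comparison. The names `best`, `edge_weight`, and the example network are hypothetical.

```python
# Toy evaluation of a recursive scalar query over a weighted DAG:
# best(v) = max over predecessors u of best(u) + weight(u, v),
# with best = 0.0 at input nodes. On a DAG, naive fixpoint iteration
# converges within |nodes| rounds.
import math

edge_weight = {
    ("x1", "h1"): 0.8, ("x2", "h1"): 0.5,
    ("x1", "h2"): -0.3, ("x2", "h2"): 1.2,
    ("h1", "y"): 1.0, ("h2", "y"): -0.7,
}
nodes = {u for e in edge_weight for u in e}
inputs = {v for v in nodes if not any(t == v for (_, t) in edge_weight)}

best = {v: (0.0 if v in inputs else -math.inf) for v in nodes}
for _ in range(len(nodes)):  # polynomially many rounds suffice on a DAG
    best = {
        v: max([best[v]] + [best[u] + w
                            for (u, t), w in edge_weight.items() if t == v])
        for v in nodes
    }

# Scalar comparison at the end: is there a path of cumulative weight > k?
k = 1.5
print(best["y"], best["y"] > k)  # 1.8 True
```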

Results & Findings

  • Expressiveness – The scalar fragment can formulate any PTIME‑computable, model‑agnostic property on reduced networks (e.g., “does there exist a path whose cumulative weight exceeds k?”).
  • Hardness – Removing the scalar restriction makes even trivial‑looking queries NP‑complete, indicating a sharp boundary between tractable and intractable query classes.
  • Fixpoint semantics – The loose fixpoint mechanism is equivalent in expressive power to the original functional fixpoint logic but easier to implement in a Datalog engine.
  • Transductions – Iterated transductions preserve decidability: a query on the original network can be rewritten as a query on the transformed network without blowing up complexity.
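
To illustrate the compositionality point, here is a toy Python sketch (my own, with made-up thresholds; `prune` and `quantize` are hypothetical helpers, not the paper's transduction language) of pruning and quantization as structure-to-structure transformations that can be chained before querying:

```python
# Sketch of transductions as maps from one weighted structure to another.
# Pruning keeps only edges whose absolute weight reaches a threshold;
# quantization snaps weights to a grid. Any downstream query then runs
# unchanged on the resulting structure.
def prune(edge_weight, threshold):
    return {e: w for e, w in edge_weight.items() if abs(w) >= threshold}

def quantize(edge_weight, step=0.5):
    return {e: round(w / step) * step for e, w in edge_weight.items()}

original = {("x", "h"): 0.05, ("h", "y"): 0.93}
pipeline = quantize(prune(original, 0.1))  # transductions compose
print(pipeline)  # {('h', 'y'): 1.0}
```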

In short, the work delineates a sweet spot: a recursive query language that is both powerful enough to capture useful analyses of neural networks and efficient enough for real‑world use.

Practical Implications

  • Model verification & safety – Engineers can write declarative checks (e.g., “no neuron in layer L has a weight sum > τ”) that are guaranteed to run in polynomial time, enabling automated audits of safety‑critical models (a toy version of this check follows this list).
  • Interpretability tooling – Recursive queries can extract hierarchical explanations (e.g., “trace all paths contributing > α to the output”) without hand‑crafting graph traversals.
  • Database‑style model serving – By treating a network as a weighted relation, existing Datalog or SQL engines can be extended to serve inference requests while simultaneously answering analytical queries.
  • Pipeline integration – The transduction framework lets developers compose preprocessing steps (quantization, pruning) as logical transformations, ensuring that downstream queries remain valid and efficiently evaluable.
  • Performance guarantees – The scalar fragment’s PTIME data complexity gives developers confidence that query evaluation will scale with model size, a crucial factor for large‑scale deployments.
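
Here is a toy Python version of the safety check quoted in the first bullet, under an assumed dictionary encoding of the network; `layer_is_safe` and the example weights are hypothetical, not part of the paper.

```python
# Toy safety check: verify that no neuron in a given layer has an
# incoming weight sum exceeding tau. A single pass over the edge
# relation, so comfortably polynomial in the size of the network.
edge_weight = {
    ("x1", "h1"): 0.8, ("x2", "h1"): 0.5,
    ("x1", "h2"): -0.3, ("x2", "h2"): 1.2,
}
layer = {"h1", "h2"}   # neurons of layer L (illustrative)
tau = 2.0

def layer_is_safe(edge_weight, layer, tau):
    sums = {v: 0.0 for v in layer}
    for (u, v), w in edge_weight.items():
        if v in layer:
            sums[v] += w
    return all(s <= tau for s in sums.values())

print(layer_is_safe(edge_weight, layer, tau))  # True: sums are 1.3 and 0.9
```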

Limitations & Future Work

  • Weight magnitude assumption – The PTIME expressiveness result relies on weights being polynomially bounded; extremely large or high‑precision weights may break the guarantees.
  • Model‑agnostic focus – The logic abstracts away from specific activation functions; extending the framework to capture model‑specific semantics (e.g., ReLU saturation) remains open.
  • Implementation prototype – The paper is primarily theoretical; a production‑grade Datalog engine supporting the loose fixpoint semantics for neural networks has yet to be built and benchmarked.
  • Scalability to deep networks – While recursion handles unbounded depth, practical evaluation on very deep architectures (hundreds of layers) may encounter memory overheads that need engineering solutions.

Future research directions include: integrating floating‑point arithmetic more tightly into the logic, developing optimized query planners for GPU‑accelerated inference, and exploring how the transduction approach can support continual learning scenarios where the network structure evolves over time.

Authors

  • Martin Grohe
  • Christoph Standke
  • Juno Steegmans
  • Jan Van den Bussche

Paper Information

  • arXiv ID: 2601.03201v1
  • Categories: cs.LO, cs.AI, cs.DB
  • Published: January 6, 2026