[Paper] Statistical Guarantees for Reasoning Probes on Looped Boolean Circuits

Published: February 3, 2026 at 02:39 PM EST
4 min read
Source: arXiv

Overview

The paper investigates how well a reasoning probe—a learning model that only sees a subset of nodes in a Boolean circuit—can infer the hidden logical gates that drive the circuit’s behavior. By modeling the circuit as a perfectly balanced ν‑ary tree with feedback loops, the authors prove that a graph‑convolutional network (GCN) can learn these hidden gates at the statistically optimal rate, regardless of the circuit’s size.

Key Contributions

  • Formal model of looped Boolean circuits: Represents iterative computation as a perfect ν‑ary tree whose output is fed back as input, capturing a stylized “looped reasoning” process.

  • Reasoning probe framework: Defines a probe that observes only a sampled subset of internal nodes and must output a probability distribution over a fixed set of admissible Boolean gates.

  • Optimal generalization bound: Shows that a GCN‑based hypothesis class achieves the minimax‑optimal error rate

    \[ \mathcal{O}\!\left(\frac{\sqrt{\log(2/\delta)}}{\sqrt{N}}\right) \]

    with probability \(1-\delta\) after querying \(N\) nodes.

  • Graph‑size‑independent guarantee: Leverages a low‑distortion one‑dimensional snowflake embedding of the circuit’s graph metric, proving that the bound does not deteriorate as the circuit grows.

  • Hybrid analytical toolkit: Combines snowflake metric embeddings (from geometric functional analysis) with statistical optimal transport techniques to handle the partial observability setting.
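The shape of the generalization bound is easy to see numerically. The sketch below (an illustration only: the constant `c` stands in for whatever constant the \(\mathcal{O}(\cdot)\) notation hides, which in the paper depends on the GCN class and gate set) shows how the bound decays with the number of queried nodes \(N\) while never referencing the circuit's size:

```python
import math

def probe_error_bound(n_samples: int, delta: float = 0.05, c: float = 1.0) -> float:
    """Illustrative upper bound c * sqrt(log(2/delta)) / sqrt(N).

    `c` is a placeholder for the constant hidden in the O(.) notation;
    the paper's actual constant depends on the hypothesis class.
    """
    return c * math.sqrt(math.log(2.0 / delta)) / math.sqrt(n_samples)

# The bound shrinks like 1/sqrt(N); note that no circuit-size
# parameter appears anywhere in the computation.
for n in (100, 10_000, 1_000_000):
    print(n, round(probe_error_bound(n), 4))
```

Quadrupling the number of sampled nodes halves the bound, which is the familiar square-root behavior of concentration-based rates.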

Methodology

  1. Circuit representation: The computation graph is a perfect ν‑ary tree (ν ≥ 2). After each round, the circuit’s output is appended to its input, creating a feedback loop that repeats the same tree structure.
  2. Partial observation model: A reasoning probe samples \(N\) internal nodes uniformly (or via any distribution) and receives the Boolean values computed at those nodes. The probe never sees the full tree.
  3. Hypothesis class: The probe is instantiated as a graph convolutional network that processes the sampled subgraph and outputs, for each queried node, a probability vector over the \(m\) admissible ν‑ary Boolean gates.
  4. Statistical analysis:
    • The authors embed the tree metric into a one‑dimensional snowflake metric with low distortion, showing that distances between nodes can be faithfully represented on a line.
    • Using this embedding, they reduce the learning problem to a transductive regression task on a low‑dimensional space.
    • They then apply tools from optimal transport to bound the discrepancy between the empirical distribution of observed nodes and the true underlying gate distribution, yielding the \(\mathcal{O}(1/\sqrt{N})\) rate.
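Steps 1–2 can be made concrete with a toy simulation. The sketch below (a minimal illustration under assumptions made here, not the authors' exact construction: the gate set, heap-style node indexing, and the feedback rule of appending the output bit to the input are all choices for this example) builds a perfect ν‑ary tree of gates, runs it for a few feedback rounds, and then lets a probe observe only a sampled subset of internal-node values:

```python
import random

NU = 2  # arity of the tree (nu >= 2)
GATES = {  # a fixed admissible gate set (m = 3 in this toy example)
    "AND": lambda bits: all(bits),
    "OR": lambda bits: any(bits),
    "XOR": lambda bits: sum(bits) % 2 == 1,
}

def evaluate(depth, gate_of, inputs):
    """Evaluate a perfect NU-ary tree of the given depth.

    `gate_of[node]` names the gate at internal node `node` (heap-indexed,
    root = 0). Returns (output_bit, dict of values at internal nodes).
    """
    n_internal = (NU ** depth - 1) // (NU - 1)
    values = {}

    def rec(node, level):
        if level == depth:  # leaf: read an input bit
            leaf_index = node - n_internal
            return inputs[leaf_index % len(inputs)]
        children = [rec(NU * node + i + 1, level + 1) for i in range(NU)]
        out = GATES[gate_of[node]](children)
        values[node] = out
        return out

    return rec(0, 0), values

def run_looped(depth, gate_of, inputs, rounds):
    """Feed the output back as an extra input bit on each round."""
    history = []
    for _ in range(rounds):
        out, values = evaluate(depth, gate_of, inputs)
        history.append(values)
        inputs = inputs + [out]  # the feedback loop
    return history

# A probe observes only N sampled internal nodes of the last round.
random.seed(0)
gate_of = {i: random.choice(list(GATES)) for i in range(2 ** 3 - 1)}
history = run_looped(3, gate_of, [True, False, True, True], rounds=2)
observed = dict(random.sample(sorted(history[-1].items()), k=3))
print(observed)  # node index -> Boolean value seen by the probe
```

The learning problem the paper analyzes is exactly the inverse of this simulation: given `observed`, recover a distribution over `GATES` at each queried node.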

Results & Findings

| Aspect | What the paper shows |
| --- | --- |
| Generalization error | With probability \(1-\delta\), the worst‑case error of the GCN probe is \(\mathcal{O}\!\left(\sqrt{\log(2/\delta)}/\sqrt{N}\right)\). |
| Dependence on graph size | The bound is independent of the total number of nodes in the circuit, thanks to the snowflake embedding. |
| Optimality | The derived rate matches known minimax lower bounds for transductive learning under partial observation, so it cannot be improved in general. |
| Robustness to gate set | The result holds for any fixed collection of \(m\) admissible ν‑ary Boolean gates, making it agnostic to the specific logical functions used. |

In plain terms, as the probe samples more nodes, the error shrinks like \(1/\sqrt{N}\), and this shrinkage is as fast as theoretically possible, even for gigantic circuits.
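The graph-size independence rests on the snowflake embedding mentioned above. A snowflake of a metric replaces each distance \(d\) by \(d^{\varepsilon}\) for some \(\varepsilon \in (0,1)\); because \((a+b)^{\varepsilon} \le a^{\varepsilon} + b^{\varepsilon}\), the result is still a metric. The sketch below (a toy check under assumptions made here: binary tree, heap indexing with root 1, and \(\varepsilon = 0.5\), none of which come from the paper) verifies this for tree graph distances:

```python
import itertools

def tree_distance(u: int, v: int) -> int:
    """Graph distance between nodes of a heap-indexed binary tree (root = 1)."""
    du, dv = u.bit_length() - 1, v.bit_length() - 1  # depths of u and v
    dist = 0
    while du > dv:  # climb the deeper node up to the shallower one's level
        u //= 2; du -= 1; dist += 1
    while dv > du:
        v //= 2; dv -= 1; dist += 1
    while u != v:   # climb both until they meet at the lowest common ancestor
        u //= 2; v //= 2; dist += 2
    return dist

def snowflake(d: float, eps: float = 0.5) -> float:
    """Snowflake transform d -> d**eps, for eps in (0, 1)."""
    return d ** eps

# The snowflaked distance still satisfies the triangle inequality,
# which is what makes a low-distortion embedding into the line possible.
nodes = range(1, 16)  # a perfect binary tree of depth 3
for a, b, c in itertools.permutations(nodes, 3):
    lhs = snowflake(tree_distance(a, c))
    rhs = snowflake(tree_distance(a, b)) + snowflake(tree_distance(b, c))
    assert lhs <= rhs + 1e-12
print("triangle inequality holds for the snowflaked tree metric")
```

The paper's contribution is stronger than this sanity check: it shows the snowflaked tree metric embeds into one dimension with low distortion, which is why the bound never picks up a dependence on circuit size.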

Practical Implications

  • Debugging and verification of hardware accelerators: Engineers can instrument only a small fraction of a large combinational‑logic block (e.g., FPGA or ASIC datapaths) and still reliably infer the underlying gate types, reducing test‑bench overhead.
  • Explainable AI for graph‑structured models: The reasoning‑probe paradigm mirrors scenarios where a model must explain decisions based on limited internal activations (e.g., probing hidden layers of GNNs). The optimal rate guarantees that a modest number of probes suffices for trustworthy explanations.
  • Security and reverse engineering: An adversary with limited probing capability (e.g., side‑channel measurements) can reconstruct the logical structure of a black‑box circuit efficiently, informing both attack strategies and defensive obfuscation techniques.
  • Iterative program synthesis: Loop‑like programs (e.g., recursive functions) can be modeled as looped Boolean circuits; the results suggest that sampling a few execution traces is enough to recover the underlying logical rules, accelerating synthesis pipelines.
  • Scalable monitoring of distributed systems: In large sensor networks where each node runs a simple Boolean decision rule, a central monitor can infer the rule set by sampling a small subset, enabling lightweight health‑checking.

Limitations & Future Work

  • Realizability assumption: The analysis assumes the true gate assignments belong to the hypothesis class (i.e., the data is perfectly realizable). Real‑world noise or gate mismatches could degrade performance.
  • Transductive setting: Guarantees are proved for the specific set of queried nodes; extending to a fully inductive scenario (generalizing to unseen nodes) remains open.
  • Tree‑structured restriction: While the perfect ν‑ary tree captures many recursive computations, many practical circuits have irregular topologies. Adapting the snowflake embedding technique to arbitrary graphs is a natural next step.
  • Empirical validation: The paper is primarily theoretical; experimental studies on actual hardware or synthetic benchmarks would help quantify constants hidden in the \(\mathcal{O}(\cdot)\) notation.

Overall, the work provides a rigorous foundation for “reasoning under partial observability” on structured Boolean systems, offering both theoretical insight and a roadmap for practical tools that can learn or audit complex logical architectures with minimal probing.

Authors

  • Anastasis Kratsios
  • Giulia Livieri
  • A. Martina Neuman

Paper Information

  • arXiv ID: 2602.03970v1
  • Categories: stat.ML, cs.LG, cs.NE, math.MG, math.ST
  • Published: February 3, 2026