[Paper] Is a LOCAL algorithm computable?

Published: February 24, 2026 at 10:43 AM EST
5 min read
Source: arXiv - 2602.21022v1

Overview

The paper Is a LOCAL algorithm computable? examines a subtle but fundamental assumption in the classic LOCAL model of distributed computing: must each node’s update rule be a computable function, or can it be any arbitrary (possibly non-computable) mapping? The authors show that this distinction dramatically changes the round complexity of even the most “well-behaved” problems, locally checkable labelings (LCLs), and that it is tightly linked to whether nodes have any knowledge of the network size n.

Key Contributions

  • Identifies a hidden assumption in the standard LOCAL model concerning computability of node functions.
  • Constructs an explicit LCL problem Π that behaves differently under three scenarios:
    1. Uncomputable LOCAL model → solvable in O(log n) rounds.
    2. Computable model with any upper bound on n → also solvable in O(log n) rounds.
    3. Computable model with no knowledge of n → requires Ω(√n) rounds.
  • Proves a general equivalence: for every LCL, once nodes have any upper bound on n, the computable and uncomputable models have the same asymptotic round complexity.
  • Bridges two previously separate themes—computability theory and the role of global knowledge—in the analysis of distributed algorithms.
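To see what these contributions are about, it helps to have the model itself in hand. The following is a minimal, hypothetical sketch of the synchronous LOCAL model in Python (the graph, IDs, and rule are illustrative, not the paper's construction): after t rounds, every node's output may depend only on its radius-t neighborhood.

```python
from collections import deque

def radius_t_view(adj, v, t):
    """BFS out to distance t: the set of nodes a LOCAL node can see in t rounds."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == t:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def run_local(adj, ids, t, rule):
    """One LOCAL execution: each node applies `rule` to its ID and the IDs it sees."""
    return {v: rule(ids[v], {ids[u] for u in radius_t_view(adj, v, t)}) for v in adj}

# A computable toy rule: output 1 iff the node holds the largest ID it can see.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # a 4-node path
ids = {0: 7, 1: 3, 2: 9, 3: 5}
out = run_local(adj, ids, t=1, rule=lambda my_id, seen: int(my_id == max(seen)))
# out == {0: 1, 1: 0, 2: 1, 3: 0}
```

The uncomputable variant of the model would let `rule` be any mathematical function of the view; the computable variant insists it be something a Turing machine can evaluate. That is exactly the distinction the paper isolates.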

Methodology

  1. Formalizing the computability question – The authors extend the classic definition of the LOCAL model to explicitly distinguish between computable and uncomputable node transition functions.
  2. Designing a “hard” LCL – They craft a labeling problem that encodes a global property (essentially a “promise” about the size of the graph) which can be verified locally only if nodes can perform a non-computable lookup or have a bound on n.
  3. Complexity separation proofs
    • Upper bounds: They give constructive algorithms that run in O(log n) rounds when either non-computable functions are allowed or a size bound is known, using classic techniques such as network decomposition and deterministic symmetry breaking.
    • Lower bound: They adapt a reduction from the classic “pointer-chasing” and “graph-size detection” problems to show that without any size information, any computable algorithm must spend Ω(√n) rounds to break symmetry enough to satisfy the LCL.
  4. Generalization – By abstracting the construction, they prove that the presence of any size bound collapses the gap between the two models for all LCLs.
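The “deterministic symmetry breaking” invoked in the upper-bound arguments can be illustrated with the classic Cole-Vishkin color-reduction step on a directed path. This is a textbook computable LOCAL routine, shown here as a standard example rather than the paper's specific algorithm:

```python
def cole_vishkin_step(my_color, pred_color):
    """One Cole-Vishkin round: encode (index of lowest differing bit, my bit there).
    Adjacent nodes always disagree on the encoding, so the coloring stays proper
    while the color range shrinks roughly from n to O(log n) per round."""
    diff = my_color ^ pred_color          # distinct colors, so diff != 0
    i = (diff & -diff).bit_length() - 1   # lowest bit position where they differ
    return 2 * i + ((my_color >> i) & 1)

# Colors 11 = 0b1011 and 10 = 0b1010 first differ at bit 0, where 11 has a 1:
new_color = cole_vishkin_step(11, 10)   # 2*0 + 1 == 1
```

Each step is a computable function of a node's radius-1 view, and iterating O(log* n) times reduces the colors on a path from the ID range down to a constant, which is why such routines sit comfortably inside the computable LOCAL model.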

Results & Findings

| Scenario | Knowledge of n | Allowed node functions | Round complexity for Π |
|----------|----------------|------------------------|------------------------|
| Uncomputable LOCAL | None | Arbitrary (possibly non-computable) | O(log n) |
| Computable LOCAL | Any upper bound on n | Computable only | O(log n) |
| Computable LOCAL | No bound | Computable only | Ω(√n) |

The key takeaway is that computability and global knowledge are not independent: any non-trivial bound on the network size eliminates the advantage that an uncomputable transition function could provide. Moreover, the authors show that this phenomenon is not limited to the crafted problem Π; it holds for the entire class of LCLs.
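A quick back-of-the-envelope comparison makes the gap in the table tangible at a realistic network size (the constants hidden in the asymptotics are ignored here, so these are scales, not exact round counts):

```python
import math

n = 10**6                               # a million-node network
log_scale = math.ceil(math.log2(n))     # scale of the O(log n) upper bounds
sqrt_scale = math.isqrt(n)              # scale of the Omega(sqrt(n)) lower bound
print(log_scale, sqrt_scale)            # 20 vs. 1000: a 50x difference
```

At this size, losing the size bound turns a ~20-round regime into a ~1000-round one, and the gap only widens as n grows.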

Practical Implications

  • Algorithm design discipline – When implementing LOCAL-style protocols (e.g., graph coloring, MIS, network decomposition), engineers should be explicit about the computability of any “oracle-like” steps. Implicitly assuming access to non-computable functions can hide unrealistic performance guarantees.
  • Importance of size hints – Even a loose upper bound on the number of participants (e.g., “the network has fewer than 10⁶ nodes”) can dramatically reduce the required communication rounds. In practice, this suggests that exchanging a small amount of meta‑information (like a size estimate) at startup is highly beneficial.
  • Security and robustness – The separation indicates that an adversary who can force nodes to rely on non‑computable behavior (e.g., by providing malformed inputs that require solving an undecidable problem) could degrade performance. Protocols should therefore validate that all local computations are effectively computable.
  • Tooling for distributed systems – Verification frameworks that model LOCAL algorithms need to incorporate the computability distinction to avoid unsound proofs of sub‑logarithmic runtimes.
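The size-hint point can be made concrete with a hypothetical startup helper (the function name and the idea of deriving a fixed round budget are this sketch's assumptions, not the paper's API): because the budget depends only logarithmically on the bound, even a wildly loose estimate costs little.

```python
import math

def round_budget(size_bound):
    """Round budget derived from a loose upper bound N >= n on the network size.
    Per the paper's result, any such bound suffices to stay in the O(log n)
    regime; hidden constants are ignored in this sketch."""
    return max(1, math.ceil(math.log2(size_bound)))

tight = round_budget(10**6)   # "fewer than a million nodes"  -> 20 rounds
loose = round_budget(10**9)   # a bound off by 1000x          -> only 30 rounds
```

Overshooting the bound by three orders of magnitude adds just ten rounds, which is why exchanging even a crude size estimate at startup pays off.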

Limitations & Future Work

  • Model specificity – The separation is proved for the deterministic LOCAL model; extending the analysis to randomized or CONGEST variants remains open.
  • Practical relevance of non‑computable functions – While theoretically illuminating, truly non‑computable functions cannot be implemented. Future work could explore resource‑bounded approximations (e.g., using heuristics) and their impact on round complexity.
  • Broader classes of problems – The paper focuses on LCLs. Investigating whether similar computability‑knowledge interactions appear for global optimization problems (e.g., distributed shortest paths) would deepen our understanding.

Bottom line: The paper uncovers a hidden axis, computability versus knowledge of network size, that shapes the fundamental limits of distributed algorithms in the LOCAL model. For developers building fast, locality-aware protocols, the practical lesson is clear: provide your nodes with even a rough sense of the network’s scale, and keep every local computation firmly within the realm of the computable. This simple step can bridge the gap between theoretically optimal logarithmic rounds and the much slower √n-scale runtimes that would otherwise arise.

Authors

  • Antonio Cruciani
  • Avinandan Das
  • Massimo Equi
  • Henrik Lievonen
  • Diep Luong-Le
  • Augusto Modanese
  • Jukka Suomela

Paper Information

  • arXiv ID: 2602.21022v1
  • Categories: cs.DC, cs.CC
  • Published: February 24, 2026