[Paper] LACIN: Linearly Arranged Complete Interconnection Networks

Published: January 9, 2026 at 04:40 AM EST

Source: arXiv - 2601.05668v1

Overview

The paper “LACIN: Linearly Arranged Complete Interconnection Networks” proposes a new family of network topologies that retain the high connectivity of complete‑graph designs while dramatically simplifying cabling and routing. By assigning identical port indices across switches, LACIN makes it possible to stitch together many complete graphs into scalable, low‑overhead fabrics, an attractive alternative to more complex hierarchical topologies such as Dragonfly or HyperX.

Key Contributions

  • LACIN topology definition – a systematic way to interconnect complete graphs using identically indexed ports, turning a dense mesh of point‑to‑point links into a linear, predictable wiring pattern.
  • Analytical model – closed‑form expressions for link count, network diameter, bisection bandwidth, and fault tolerance that show LACIN scales more gracefully than traditional complete‑graph compositions.
  • Routing simplification – a port‑index‑based routing algorithm that eliminates per‑hop address translation and reduces router lookup tables to a few bits.
  • Hardware‑friendly implementation – design guidelines for ASIC/FPGA switch fabrics and for off‑the‑shelf Ethernet/InfiniBand adapters that can be re‑used across all LACIN sizes.
  • Experimental validation – simulation (and a small‑scale prototype) demonstrating comparable latency and throughput to Dragonfly/HyperX while cutting cabling complexity by up to 70 % and routing logic by ≈40 %.

Methodology

  1. Topology Construction – The authors start from a complete graph K_n (every node directly linked to every other). They then replicate this block m times and connect the replicas linearly: each switch’s port i is wired to port i of the same‑indexed switch in the neighboring block (see the construction sketch after this list).
  2. Mathematical Analysis – Using graph theory, they derive formulas for key metrics, e.g., number of links = m·n(n−1)/2 + (m−1)·n. They compare these against Dragonfly and HyperX under equal node counts.
  3. Routing Scheme – Because ports share the same index across blocks, a packet reaches its destination by a deterministic two‑step rule: if the target is in the same block, use intra‑block routing; otherwise forward along the linear spine until the target block is reached, then take the intra‑block path (see the routing sketch below).
  4. Simulation & Prototype – They built a cycle‑accurate network simulator (based on BookSim) and a 64‑node FPGA prototype. Workloads included synthetic traffic (uniform, hotspot) and real HPC kernels (e.g., stencil, all‑reduce).
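
To make the construction and the link‑count formula concrete, here is a minimal Python sketch, not taken from the paper: it builds m blocks of K_n joined by a linear spine, using one inter‑block link per switch index as a simplification of the per‑port wiring. The node labels (block, switch) and the function name lacin_links are illustrative, not the paper’s notation.

```python
# Minimal sketch (not from the paper): build the LACIN block/spine
# structure from step 1 and check the link-count formula from step 2.
from itertools import combinations

def lacin_links(n: int, m: int) -> set[frozenset]:
    """Return the link set of m complete-graph blocks K_n joined linearly."""
    links = set()
    for b in range(m):
        # Intra-block: every pair of switches inside block b forms K_n.
        for i, j in combinations(range(n), 2):
            links.add(frozenset({(b, i), (b, j)}))
    for b in range(m - 1):
        # Inter-block spine: switch j in block b to switch j in block b+1.
        for j in range(n):
            links.add(frozenset({(b, j), (b + 1, j)}))
    return links

if __name__ == "__main__":
    n, m = 4, 3
    links = lacin_links(n, m)
    # Closed-form count from the paper: m*n(n-1)/2 + (m-1)*n.
    assert len(links) == m * n * (n - 1) // 2 + (m - 1) * n
    print(f"n={n}, m={m}: {len(links)} links")  # 26
```

Each block boundary contributes exactly n spine links, which is where the (m−1)·n term in the formula comes from.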
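
The two‑step routing rule can likewise be sketched in a few lines, again as an illustration rather than the paper’s actual algorithm: ride the spine on the source’s switch index until reaching the destination block, then take the single direct hop inside that block’s complete graph.

```python
# Hedged sketch of the two-step rule from step 3; the hop sequence is
# inferred from this summary, and the paper's port-selection logic may
# differ. Switches are labeled (block, index) as in the sketch above.
def route(src: tuple[int, int], dst: tuple[int, int]) -> list[tuple[int, int]]:
    """Return the switch sequence from src to dst under the two-step rule."""
    (sb, si), (db, di) = src, dst
    path = [src]
    # Spine phase: move block by block along the linear arrangement,
    # keeping the same switch index (hence the same port index).
    step = 1 if db >= sb else -1
    for b in range(sb, db, step):
        path.append((b + step, si))
    # Intra-block phase: at most one direct hop inside the target K_n.
    if si != di:
        path.append((db, di))
    return path

print(route((0, 2), (3, 5)))  # [(0, 2), (1, 2), (2, 2), (3, 2), (3, 5)]
print(route((1, 0), (1, 3)))  # [(1, 0), (1, 3)] -- same block, one hop
```

Because intra‑block traffic needs at most one hop and the spine adds one hop per block boundary, the forwarding decision reduces to a block comparison plus a port index, consistent with the few‑bit routing tables reported below.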

Results & Findings

| Metric | LACIN (64 nodes) | Dragonfly (64) | HyperX (64) |
|---|---|---|---|
| Average hop count | 2.1 | 2.3 | 2.2 |
| Peak bisection bandwidth | 0.95 × theoretical max | 0.92 × | 0.94 × |
| Cabling length | 30 % of Dragonfly | (baseline) | n/a |
| Routing table size | 8 bits per port | 12 bits | 11 bits |
| Fault tolerance (single link) | 99.8 % reachable | 99.5 % | 99.6 % |

Takeaway: LACIN matches or slightly outperforms the latency/bandwidth of existing high‑performance topologies while slashing the physical and logical overhead that usually hampers large‑scale deployments.

Practical Implications

  • Easier data‑center rollout – Identical port indexing means the same cable type and length can be used throughout the rack, reducing inventory and installation time.
  • Lower ASIC cost – Switch ASICs need only a small, fixed routing table, allowing designers to reuse a single “LACIN‑ready” chip across many product families (from on‑chip networks in many‑core CPUs to rack‑scale interconnects).
  • Scalable supercomputers – When scaling from a few hundred to tens of thousands of nodes, the linear spine grows linearly, avoiding the quadratic link explosion of a single flat complete‑graph design.
  • Fault‑diagnosis simplicity – Because each port’s role is deterministic, automated testing tools can quickly map a failed cable to a specific logical link, speeding up maintenance.
  • Potential for AI accelerators – Many‑core AI chips already use mesh or torus fabrics; swapping to a LACIN‑style complete‑graph block could boost all‑to‑all communication (e.g., for model‑parallel training) without a proportional increase in wiring complexity.

Limitations & Future Work

  • Physical layout constraints – While cabling is reduced, the linear spine still requires careful floor‑planning to avoid long runs that could become latency bottlenecks in ultra‑large systems.
  • Topology rigidity – LACIN assumes a fixed block size n; dynamically resizing blocks (e.g., for elastic cloud workloads) would need additional control logic.
  • Evaluation scope – The paper’s experimental validation stops at 64 nodes; larger‑scale simulations (hundreds of thousands of nodes) are needed to confirm the scaling claims under real traffic patterns.
  • Integration with existing protocols – Mapping LACIN’s routing scheme onto standard Ethernet/InfiniBand fabrics may require custom firmware or driver extensions, which the authors plan to explore.

Bottom line: LACIN offers a compelling middle ground between the raw performance of complete graphs and the pragmatic wiring of hierarchical networks, making it a promising candidate for the next generation of high‑performance, developer‑friendly interconnects.

Authors

  • Ramón Beivide
  • Cristóbal Camarero
  • Carmen Martínez
  • Enrique Vallejo
  • Mateo Valero

Paper Information

  • arXiv ID: 2601.05668v1
  • Categories: cs.AR, cs.DC, cs.NI
  • Published: January 9, 2026