[Paper] SEDULity: A Proof-of-Learning Framework for Distributed and Secure Blockchains with Efficient Useful Work

Published: December 15, 2025 at 01:55 PM EST
4 min read
Source: arXiv - 2512.13666v1

Overview

The paper introduces SEDULity, a new “Proof‑of‑Learning” (PoL) protocol that lets blockchain miners do useful machine‑learning training instead of wasteful hash‑puzzle crunching. By weaving the block template into the ML training loop and designing a verification‑friendly “useful function,” the authors claim they can keep the security guarantees of traditional Proof‑of‑Work (PoW) while dramatically cutting energy waste.

Key Contributions

  • SEDULity framework: a fully distributed PoL system that simultaneously secures the ledger and trains ML models.
  • Template‑encoded training: the block’s header data is embedded directly into the loss function, turning every mining attempt into a legitimate learning step.
  • Hard‑to‑solve, easy‑to‑verify useful function: a cryptographically friendly construction that makes cheating computationally expensive yet allows rapid verification by other nodes.
  • Incentive design & game‑theoretic analysis: shows that rational miners maximize profit by behaving honestly, given appropriately tuned rewards and penalties.
  • Extensibility: the core ideas can be adapted to other useful workloads beyond ML (e.g., scientific simulations, data‑cleaning tasks).
  • Empirical validation: simulation results demonstrate comparable block‑finalization latency to PoW while achieving measurable model‑training progress.

Methodology

  1. Block‑template encoding – The miner takes the current block’s metadata (previous hash, timestamp, transaction Merkle root, etc.) and injects it as a deterministic seed into the ML training process (e.g., as part of the initialization or as a regularization term). This guarantees that each candidate block corresponds to a unique training trajectory (a seed‑derivation sketch follows this list).

  2. Useful function design – Instead of solving a random hash puzzle, miners must minimize a useful loss that combines the standard ML objective (e.g., cross‑entropy) with a cryptographic hardness component (the second sketch after this list illustrates the asymmetry). The loss is constructed so that:

    • Finding a low‑loss solution requires genuine computation (hard).
    • Verifying a claimed loss value is a simple arithmetic check (easy).

  3. Distributed consensus – After a miner finishes a training epoch and produces a candidate block, it broadcasts the model checkpoint and the claimed loss. Peers verify the loss, check that the block meets the usual PoW criteria (e.g., difficulty target), and then vote to adopt the block.

  4. Incentive mechanism – Rewards are split between a block reward (as in PoW) and a learning reward proportional to the improvement in the model’s performance. A penalty is imposed if a miner submits a falsified loss, enforced by a stake‑bond that can be slashed (the third sketch below shows the accounting).

  5. Theoretical analysis – Using a game‑theoretic model, the authors prove that with proper parameter settings (reward ratio, difficulty, bond size), the Nash equilibrium is for miners to follow the honest training protocol.

  6. Simulation – Experiments on a synthetic dataset and a small CNN model compare SEDULity’s block time, energy consumption, and model accuracy against classic PoW and a naïve PoL baseline.
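
To make step 1 concrete, here is a minimal sketch assuming one hypothetical encoding: the header fields are hashed into a seed that drives weight initialization, so any change to the block template changes the training trajectory. The paper may embed the template differently (e.g., through a regularization term); the field layout and seed-to-initializer mapping below are illustrative, not the paper's exact construction.

```python
# Sketch of block-template encoding (step 1); field layout and the
# seed-to-initializer mapping are illustrative assumptions.
import hashlib
import numpy as np

def header_seed(prev_hash: str, merkle_root: str, timestamp: int) -> int:
    """Hash block metadata into a 64-bit seed; any header change
    yields a different seed, hence a unique training trajectory."""
    digest = hashlib.sha256(
        f"{prev_hash}|{merkle_root}|{timestamp}".encode()
    ).digest()
    return int.from_bytes(digest[:8], "big")

# Seed the initializer so the candidate block determines the starting point.
rng = np.random.default_rng(header_seed("00ab12ef", "9f3c77d0", 1734280500))
init_weights = rng.normal(0.0, 0.02, size=(784, 10))  # e.g., a small linear layer
```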
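
Steps 2 and 3 rest on the asymmetry between producing and checking a solution. The sketch below assumes "easy to verify" means one hash check plus a single loss evaluation on a small public audit set; LOSS_THRESHOLD, eval_loss, and the comparison tolerance are hypothetical stand-ins, not values from the paper.

```python
# Sketch of the hard-to-solve / easy-to-verify asymmetry (steps 2-3).
# LOSS_THRESHOLD, the audit-set evaluator, and the tolerance are
# illustrative assumptions.
import hashlib
from typing import Callable

LOSS_THRESHOLD = 0.35  # difficulty-style knob on the useful loss

def meets_hash_target(checkpoint: bytes, difficulty_bits: int) -> bool:
    """PoW-style component: the checkpoint hash must clear a target."""
    h = int.from_bytes(hashlib.sha256(checkpoint).digest(), "big")
    return h < 2 ** (256 - difficulty_bits)

def verify_block(checkpoint: bytes, claimed_loss: float,
                 eval_loss: Callable[[bytes], float],
                 difficulty_bits: int) -> bool:
    """Verifier side: one hash plus one forward pass over a small public
    audit set, far cheaper than the training needed to produce the block."""
    if not meets_hash_target(checkpoint, difficulty_bits):
        return False
    recomputed = eval_loss(checkpoint)  # single evaluation, no training
    return abs(recomputed - claimed_loss) < 1e-6 and recomputed < LOSS_THRESHOLD
```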
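
Once a block verifies, the step-4 incentive split is simple accounting. A sketch with assumed parameter names:

```python
# Sketch of the step-4 incentive split; parameter names and the exact
# slashing rule are assumptions for illustration.
def settle_reward(block_reward: float, learn_weight: float,
                  prev_loss: float, new_loss: float,
                  claimed_loss: float, bond: float) -> float:
    """Pay block reward + learning reward for honest improvement;
    slash the stake bond if the claimed loss was falsified."""
    if abs(claimed_loss - new_loss) > 1e-6:   # verifiers caught a false claim
        return -bond                          # bond is slashed
    improvement = max(0.0, prev_loss - new_loss)
    return block_reward + learn_weight * improvement
```

Because verification is cheap enough for every peer to run, detection of a falsified loss is near-certain; a bond larger than the maximum per-block gain then makes honest training the profit-maximizing strategy, consistent with the paper's equilibrium claim.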

Results & Findings

| Metric | PoW (baseline) | Naïve PoL | SEDULity |
| --- | --- | --- | --- |
| Avg. block time | ~10 min | ~12 min (high variance) | ~10.5 min |
| Energy per block | 1.2 GJ | 0.9 GJ | 0.6 GJ |
| Model accuracy after 1000 blocks | N/A | 71 % | 78 % |
| Verification latency | < 1 ms | 5 ms | 2 ms |

  • Security: The probability of a successful double‑spend attack remains bounded by the same difficulty parameter as PoW because the useful function’s hardness mirrors the hash puzzle’s entropy (the standard bound is sketched after this list).
  • Efficiency: Energy consumption drops by ~50 % relative to PoW while still meeting target block intervals.
  • Learning progress: The trained model converges to a respectable accuracy, showing that the “useful work” yields tangible ML benefits.
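
For intuition on the security bullet above: in the classic random-walk model of PoW, an attacker controlling a fraction q of the total work ever erases an honest lead of z blocks with probability (q/p)^z, where p = 1 − q. This is Nakamoto's standard bound, which SEDULity's argument inherits under its hardness assumption; it is not a formula taken from this paper.

```python
# Nakamoto's gambler's-ruin bound for PoW, which SEDULity's security
# argument inherits under its hardness assumption (standard result,
# not taken from this paper).
def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with work share q ever erases an
    honest lead of z blocks."""
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

print(catch_up_probability(0.3, 6))  # ~0.0062: 6 confirmations vs. a 30% attacker
```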

Practical Implications

  • Sustainable mining – Cloud providers, edge devices, or even IoT fleets could contribute to a blockchain while simultaneously training models for federated‑learning services, reducing the carbon footprint of both domains.
  • Monetizing idle compute – Enterprises with spare GPU cycles can earn crypto rewards by plugging into a SEDULity‑compatible network, turning underutilized hardware into revenue.
  • Domain‑specific blockchains – Projects that already need large‑scale ML (e.g., autonomous‑vehicle data aggregation, medical‑image labeling) can embed their training pipelines directly into consensus, aligning economic incentives with business goals.
  • Regulatory friendliness – Demonstrating that a public ledger performs socially beneficial computation may ease scrutiny from sustainability‑focused regulators and investors.

Limitations & Future Work

  • Model‑specificity – The current design assumes a relatively small, well‑behaved ML task; scaling to massive models (e.g., GPT‑scale) would require careful bandwidth and checkpoint‑size management.
  • Verification overhead – Although verification is cheap, it still adds a non‑zero latency that could become a bottleneck in high‑throughput networks.
  • Adversarial training attacks – The paper does not deeply explore poisoning or backdoor insertion via the learning process; robust defenses will be needed for safety‑critical applications.
  • Parameter tuning – Selecting the right balance between block reward, learning reward, and bond size is non‑trivial and may need dynamic adjustment mechanisms.

Future research directions include extending SEDULity to heterogeneous useful workloads (e.g., scientific simulations), integrating privacy‑preserving training (federated or encrypted), and building a real‑world testnet to study long‑term economic dynamics.

Authors

  • Weihang Cao
  • Mustafa Doger
  • Sennur Ulukus

Paper Information

  • arXiv ID: 2512.13666v1
  • Categories: cs.CR, cs.DC, cs.IT, cs.LG
  • Published: December 15, 2025
