A non-decision protocol for human–AI systems with explicit stop conditions

Published: January 5, 2026 at 11:42 AM EST
1 min read
Source: Dev.to

Overview

I’m sharing a technical note proposing a non-decision protocol for human–AI systems. The core idea is simple: AI systems should not decide. They should clarify, trace, and stop — explicitly.

Key Principles

  • Human responsibility is non‑transferable.
  • Explicit stop conditions are defined and enforced.
  • Traceability of AI outputs is required.
  • Decision delegation to automated systems is prevented (sketched below).
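
To make these principles concrete, here is a minimal, hypothetical sketch of what a non-decision wrapper could look like in code. It is not taken from the Zenodo document; all names (NonDecisionAssistant, asks_for_a_decision, and so on) are illustrative assumptions. The point it demonstrates is simply that every output is traced, a matching stop condition halts the interaction with an explicit reason, and the system only clarifies, never decides.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable, List, Optional


@dataclass
class TraceEntry:
    """A single traceable output: what was produced, when, and of what kind."""
    timestamp: str
    kind: str      # "clarification" or "stop"
    content: str


@dataclass
class NonDecisionAssistant:
    """Illustrative wrapper: the system clarifies, traces, and stops; it never decides.

    stop_conditions are predicates over the request; if any fires, the system
    halts and hands control back to the human with an explicit reason.
    """
    stop_conditions: List[Callable[[str], Optional[str]]] = field(default_factory=list)
    trace: List[TraceEntry] = field(default_factory=list)

    def _log(self, kind: str, content: str) -> None:
        self.trace.append(
            TraceEntry(datetime.now(timezone.utc).isoformat(), kind, content)
        )

    def handle(self, request: str) -> str:
        # 1. Enforce explicit stop conditions before producing anything.
        for condition in self.stop_conditions:
            reason = condition(request)
            if reason is not None:
                self._log("stop", reason)
                return f"STOP: {reason} The decision remains with you."

        # 2. Otherwise, clarify rather than decide: lay out trade-offs and
        #    questions, never a chosen course of action.
        clarification = (
            "Here are the trade-offs I can lay out; the choice is yours. "
            "Which constraint matters most to you?"
        )
        self._log("clarification", clarification)
        return clarification


# Example stop condition: any request that asks the system to choose.
def asks_for_a_decision(request: str) -> Optional[str]:
    verbs = ("decide", "choose", "pick", "approve")
    if any(v in request.lower() for v in verbs):
        return "This request delegates a decision to an automated system."
    return None


if __name__ == "__main__":
    assistant = NonDecisionAssistant(stop_conditions=[asks_for_a_decision])
    print(assistant.handle("Please decide which vendor we should use."))
    print(assistant.handle("Summarize the trade-offs between these vendors."))
    for entry in assistant.trace:
        print(entry)
```

The trace list is what makes outputs auditable after the fact, and the stop conditions are what keep responsibility with the human; how those are defined in practice is specified in the archived document, not here.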

Positioning

This work is intended as a structural safety layer. It is not a model, a policy, or a governance framework.

Reference

The full document is archived with a DOI on Zenodo:
https://doi.org/10.5281/zenodo.18100154

Call for Feedback

I’m interested in feedback from people working on:

  • AI safety
  • Human‑in‑the‑loop systems
  • Decision theory
  • Critical system design

This is not a product or a startup pitch—just a protocol‑level contribution.
