A Guide to HITL, HOTL, and HOOTL Workflows
Source: Dev.to
Human-in-the-Loop (HITL)
In a HITL workflow, the AI acts as a sophisticated assistant that cannot complete its task without a human checkpoint.
When to use it
- High‑stakes legal or medical documents.
- Creative writing where “voice” and “nuance” are vital.
- Generating code for production systems.
Example
```python
# HITL: Human reviews and approves/edits AI output before finalizing.
from google import genai
from dotenv import load_dotenv

load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"

def hitl_press_release(topic):
    """Human-in-the-Loop press-release generator."""
    prompt = f"Write a short press release for: {topic}"
    ai_draft = client.models.generate_content(
        model=MODEL_NAME,
        contents=prompt
    ).text

    print("\n--- [ACTION REQUIRED] REVIEW AI DRAFT ---")
    print(ai_draft)

    feedback = input("\nWould you like to (1) Accept, (2) Rewrite, or (3) Edit manually? ")
    if feedback == "1":
        final_output = ai_draft
    elif feedback == "2":
        critique = input("What should the AI change? ")
        # Re-run with the human critique folded into the prompt.
        return hitl_press_release(f"{topic}. Note: {critique}")
    else:
        final_output = input("Paste your manually edited version here: ")

    print("\n[SUCCESS] Press release finalized and saved.")
    return final_output

hitl_press_release("Launch of a new sustainable coffee brand")
```
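The `input()` prompts above block a terminal, but in an application the same checkpoint is usually a callback or a review queue. A minimal sketch of that pattern, where the reviewer is a plain function (all names here are illustrative, not part of any SDK):

```python
# HITL sketch: the human checkpoint as a pluggable reviewer callback.
def finalize_with_review(draft, reviewer):
    """Only return content the reviewer has explicitly approved or edited."""
    decision, revised = reviewer(draft)
    if decision == "accept":
        return draft
    if decision == "edit":
        return revised
    raise ValueError("Draft rejected; nothing was published.")

# A stand-in reviewer that trims marketing fluff before approving.
approve_with_edit = lambda draft: ("edit", draft.replace("revolutionary ", ""))
final = finalize_with_review("Our revolutionary coffee is here.", approve_with_edit)
print(final)  # -> Our coffee is here.
```

The key property is that nothing reaches the caller without passing through `reviewer`, which is exactly the HITL guarantee.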
Human-on-the-Loop (HOTL)
In a HOTL workflow, the AI operates autonomously and at scale, while a human monitors a dashboard and intervenes only when the AI deviates from the goal.
When to use it
- Live social‑media moderation.
- Real‑time customer‑support chatbots.
- Monitoring industrial IoT sensors.
Example
```python
# HOTL: Human monitors AI decisions in real-time and can veto.
from google import genai
from dotenv import load_dotenv
import time

load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"

def hotl_support_monitor(tickets):
    print("System active. Monitoring AI actions... (Press Ctrl+C to PAUSE/VETO)")
    for i, ticket in enumerate(tickets):
        try:
            response = client.models.generate_content(
                model=MODEL_NAME,
                contents=f"Categorize this ticket (Billing/Tech/Sales): {ticket}"
            )
            category = response.text.strip()
            print(f"[Log {i+1}] Ticket: {ticket[:30]}... -> Action: Tagged as {category}")
            time.sleep(2)  # Pace the loop so the supervisor can read the log.
        except KeyboardInterrupt:
            print(f"\n[VETO] Human supervisor has paused the system on ticket: {ticket}")
            action = input("Should we (C)ontinue or (S)kip this ticket? ")
            if action.lower() == 's':
                continue

tickets = ["My bill is too high", "The app keeps crashing", "How do I buy more?"]
hotl_support_monitor(tickets)
```
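The Ctrl+C veto works for a demo, but a production HOTL loop usually escalates automatically instead of waiting for a human to notice. A minimal sketch of that idea, where low-confidence decisions are routed to a review queue (`classify` here is a hypothetical keyword stand-in for the model call; a real system might derive confidence from log-probabilities or a verifier model):

```python
# HOTL sketch: auto-escalate low-confidence decisions to a human queue.
def classify(ticket):
    """Pretend classifier returning (category, confidence)."""
    rules = {"bill": ("Billing", 0.95), "crash": ("Tech", 0.90)}
    for keyword, result in rules.items():
        if keyword in ticket.lower():
            return result
    return ("Sales", 0.40)  # No match -> low confidence

def hotl_triage(tickets, threshold=0.8):
    auto_handled, review_queue = [], []
    for ticket in tickets:
        category, confidence = classify(ticket)
        if confidence >= threshold:
            auto_handled.append((ticket, category))   # AI acts alone
        else:
            review_queue.append((ticket, category))   # Human decides
    return auto_handled, review_queue

auto, queue = hotl_triage(["My bill is too high", "Something odd happened"])
print(auto)   # -> [('My bill is too high', 'Billing')]
print(queue)  # -> [('Something odd happened', 'Sales')]
```

The supervisor then only ever looks at `review_queue`, which is what keeps HOTL viable at scale.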
Human-out-of-the-Loop (HOOTL)
In a HOOTL workflow, the AI handles the entire process from start to finish. Human intervention occurs only after the fact (e.g., during an audit or weekly review). This is ideal for high‑volume, low‑risk tasks.
When to use it
- Spam filtering.
- Translating massive databases of product descriptions.
- Basic data cleaning and formatting.
Example
```python
# HOOTL: AI processes batch independently; human reviews final report.
from google import genai
from dotenv import load_dotenv

load_dotenv(override=True)
client = genai.Client()
MODEL_NAME = "gemini-2.5-flash"

def hootl_batch_processor(data_list):
    print(f"Starting HOOTL process: {len(data_list)} items to process.")
    final_report = []
    for item in data_list:
        response = client.models.generate_content(
            model=MODEL_NAME,
            contents=f"Extract key sentiment (Happy/Sad/Neutral): {item}"
        )
        final_report.append({"data": item, "sentiment": response.text.strip()})
    return final_report

reviews = ["Great food!", "Slow service", "Expensive but worth it"]
report = hootl_batch_processor(reviews)
print("Final Report:", report)
```
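Since no one watches a HOOTL pipeline in real time, the "after the fact" review usually means auditing a random sample of outputs. A minimal sketch of that audit step (the sample size and fixed seed are assumptions for illustration, not part of the example above):

```python
import random

# HOOTL sketch: pull a random sample of finished work for a periodic human audit.
def sample_for_audit(report, sample_size=2, seed=42):
    """Return a reproducible random sample of processed items for human review."""
    rng = random.Random(seed)  # Fixed seed keeps a given week's audit reproducible.
    return rng.sample(report, min(sample_size, len(report)))

report = [
    {"data": "Great food!", "sentiment": "Happy"},
    {"data": "Slow service", "sentiment": "Sad"},
    {"data": "Expensive but worth it", "sentiment": "Happy"},
]
audit_batch = sample_for_audit(report)
print(f"Auditing {len(audit_batch)} of {len(report)} items this week.")
```

If the sampled error rate creeps up, that is the signal to move the task back toward HOTL or HITL.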
Comparison of Workflows
| Workflow | Interaction Level | Human Effort | Latency (Speed) | Risk Tolerance |
|---|---|---|---|---|
| HITL | Active | High | Slow | Zero Tolerance (High Risk) |
| HOTL | Passive | Medium | Medium | Managed Risk (Scale + Safety) |
| HOOTL | None | Low | Very Fast | Low Risk (High Volume) |
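The table reads as a simple decision rule: the less tolerance you have for a single bad output, the closer the human sits to the loop. A toy encoding of that rule (the risk tiers are illustrative, not a standard taxonomy):

```python
# Sketch: map the table's risk-tolerance column to a workflow choice.
def choose_workflow(risk):
    """risk: 'high' (zero tolerance), 'medium' (managed), or 'low' (high volume)."""
    if risk == "high":
        return "HITL"   # A human approves every output.
    if risk == "medium":
        return "HOTL"   # A human supervises and can veto.
    return "HOOTL"      # Audit after the fact.

print(choose_workflow("high"))  # -> HITL
print(choose_workflow("low"))   # -> HOOTL
```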