EIOC as a Detection Model: From Framework to Code

Published: December 31, 2025 at 01:00 PM EST
7 min read
Source: Dev.to

What if emotional manipulation in UX had a runtime detector?

EIOC (Emotion–Intent–Outcome–Context) started as an explanatory lens—a way to analyze why certain interfaces feel coercive and others feel aligned. But frameworks that only explain aren’t enough. We need frameworks that detect.

This post walks through building EIOC as a formal detection model: typed, executable, configurable, and auditable. At detection time you’re not philosophizing; you’re mapping a concrete interaction into a structured EIOCObservation, then running it through rules.


The Four Axes

Every interaction (screen, flow, message, nudge) can be scored across four axes:

| Axis | Question |
|------|----------|
| E — Emotion | What emotional state is being targeted or amplified? |
| I — Intent | What is the system’s operative intent toward the user? |
| O — Outcome | What is the likely user outcome (short‑/long‑term)? |
| C — Context | What constraints and power asymmetries shape this moment? |

For detection, each axis needs:

  • Dimensions – sub‑questions you can score
  • Scales – categorical tags mapped to numeric scores
  • Thresholds / Patterns – combinations that constitute “fearware”, “manipulative”, “aligned”, etc. (see the sketch below)
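
For example, a scale for the Emotion axis might map each categorical tag to a rough numeric score, with a threshold marking the tags that warrant closer review. This is a minimal sketch; the scores and the threshold are illustrative choices, not part of the post’s spec.

# Hypothetical numeric scale for the Emotion axis. Negative scores mark
# states that are easy to exploit; positive scores mark states associated
# with supportive design. The exact values are illustrative.
EMOTION_SCALE = {
    "fear": -2,
    "scarcity": -2,
    "guilt": -1,
    "shame": -1,
    "neutral": 0,
    "trust": 1,
    "care": 1,
    "empowerment": 2,
}

# Illustrative threshold: tags scoring at or below -1 warrant closer review.
REVIEW_THRESHOLD = -1

def needs_emotion_review(tag: str) -> bool:
    """Return True if the emotion tag scores at or below the review threshold."""
    return EMOTION_SCALE.get(tag, 0) <= REVIEW_THRESHOLD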

1. The EIOC Schema

Vocabulary

from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Dict, Any

class EmotionTarget(Enum):
    NEUTRAL = "neutral"
    FEAR = "fear"
    SCARCITY = "scarcity"
    GUILT = "guilt"
    SHAME = "shame"
    TRUST = "trust"
    CARE = "care"
    EMPOWERMENT = "empowerment"

class IntentType(Enum):
    SUPPORTIVE = "supportive"
    NEUTRAL = "neutral"
    COERCIVE = "coercive"
    EXPLOITATIVE = "exploitative"

class OutcomeType(Enum):
    USER_BENEFIT = "user_benefit"
    PLATFORM_BENEFIT = "platform_benefit"
    MUTUAL_BENEFIT = "mutual_benefit"
    USER_HARM = "user_harm"
    UNKNOWN = "unknown"

class ContextRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNKNOWN = "unknown"

Observation Container

@dataclass
class EIOCObservation:
    """A structured interpretation of a user‑facing interaction."""

    # Identification
    interaction_id: str
    description: str

    # EIOC axes
    emotion_target: EmotionTarget
    intent_type: IntentType
    outcome_type: OutcomeType
    context_risk: ContextRisk

    # Audit trail
    evidence: Optional[Dict[str, Any]] = None
    tags: Optional[List[str]] = None

Possible extensions (sketched below)

  • emotion_intensity: int – scale from –2 (negative) to +2 (positive)
  • journey_stage: str – where in the user flow this occurs
  • user_segment: str – e.g., vulnerable users, new users, high‑risk context
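
A hedged sketch of how those optional fields could be added, building on the EIOCObservation dataclass above (the subclass name and the defaults are illustrative, not part of the post’s schema):

from dataclasses import dataclass
from typing import Optional

@dataclass
class ExtendedEIOCObservation(EIOCObservation):
    """EIOCObservation plus the optional extensions listed above."""
    emotion_intensity: int = 0           # -2 (strongly negative) to +2 (strongly positive)
    journey_stage: Optional[str] = None  # e.g. "onboarding", "checkout", "cancellation"
    user_segment: Optional[str] = None   # e.g. "new_user", "vulnerable_user"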

2. Detection Logic: Turning EIOC into Rules

Think of EIOC detection as a tiny rule engine that:

  1. Normalizes an interaction into an EIOCObservation
  2. Applies a list of DetectionRule instances
  3. Returns classifications + rationales

2.1 Findings and Severity

class FindingSeverity(Enum):
    INFO = "info"
    WARNING = "warning"
    CRITICAL = "critical"

@dataclass
class DetectionFinding:
    """The output of a triggered rule."""
    rule_id: str
    severity: FindingSeverity
    label: str
    description: str
    recommendation: Optional[str] = None

2.2 Rule Interface

from abc import ABC, abstractmethod

class DetectionRule(ABC):
    """Base class for all EIOC detection rules."""
    rule_id: str
    label: str
    description: str

    @abstractmethod
    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        """Evaluate an observation. Return a finding if the rule triggers."""
        ...

2.3 Example: Fearware Coercion Rule

class FearwareCoercionRule(DetectionRule):
    """Detects fearware‑style manipulation patterns."""

    rule_id = "FW-001"
    label = "Fearware‑style coercion"
    description = (
        "Flags interactions that intentionally target fear/scarcity "
        "with coercive intent and non‑beneficial or unknown user outcomes, "
        "especially in high‑risk contexts."
    )

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        fear_emotions = {
            EmotionTarget.FEAR,
            EmotionTarget.SCARCITY,
            EmotionTarget.GUILT,
            EmotionTarget.SHAME,
        }
        coercive_intents = {
            IntentType.COERCIVE,
            IntentType.EXPLOITATIVE,
        }
        harmful_outcomes = {
            OutcomeType.PLATFORM_BENEFIT,
            OutcomeType.USER_HARM,
            OutcomeType.UNKNOWN,
        }
        risky_contexts = {
            ContextRisk.MEDIUM,
            ContextRisk.HIGH,
        }

        if (
            obs.emotion_target in fear_emotions
            and obs.intent_type in coercive_intents
            and obs.outcome_type in harmful_outcomes
            and obs.context_risk in risky_contexts
        ):
            return DetectionFinding(
                rule_id=self.rule_id,
                severity=FindingSeverity.CRITICAL,
                label=self.label,
                description=(
                    "This interaction weaponizes fear/scarcity under coercive intent, "
                    "with unclear or harmful user outcomes in a medium/high-risk context."
                ),
                recommendation=(
                    "Re-architect this moment to remove fear-based leverage "
                    "and restore user agency."
                ),
            )
        return None

2.4 Putting It All Together

def run_detection(obs: EIOCObservation, rules: List[DetectionRule]) -> List[DetectionFinding]:
    findings: List[DetectionFinding] = []
    for rule in rules:
        result = rule.evaluate(obs)
        if result:
            findings.append(result)
    return findings


# Example usage
obs = EIOCObservation(
    interaction_id="login‑warning-01",
    description="Login page shows ‘Your account will be locked in 5 minutes!’",
    emotion_target=EmotionTarget.FEAR,
    intent_type=IntentType.COERCIVE,
    outcome_type=OutcomeType.PLATFORM_BENEFIT,
    context_risk=ContextRisk.HIGH,
    evidence={"screenshot": "url/to/img"},
    tags=["login", "urgency"]
)

rules = [FearwareCoercionRule()]  # add more rule instances as needed
findings = run_detection(obs, rules)

for f in findings:
    print(f"{f.severity.value.upper()}: {f.label}{f.description}")

Running the snippet above would emit a CRITICAL finding for the illustrated “fearware” pattern, giving auditors a concrete, reproducible signal that the interaction should be revisited.


Takeaway

By turning the EIOC lens into a typed schema and a rule‑based engine, you move from explaining manipulation to detecting it at runtime. The model stays human‑readable, auditable, and extensible—exactly what a responsible UX governance process needs. The next two sections flesh this simple loop out into a reusable detector and a rule set that non‑engineers can configure.


3. The Detector

Orchestrator

class EIOCDetector:
    """Orchestrates EIOC rule evaluation."""

    def __init__(self, rules: List[DetectionRule]):
        self.rules = rules

    def evaluate(self, obs: EIOCObservation) -> List[DetectionFinding]:
        """Run all rules against an observation."""
        findings: List[DetectionFinding] = []
        for rule in self.rules:
            finding = rule.evaluate(obs)
            if finding is not None:
                findings.append(finding)
        return findings

Usage Example

# Initialize detector with rules
rules = [
    FearwareCoercionRule(),
    # Add more rules here...
]
detector = EIOCDetector(rules=rules)

# Create an observation
obs = EIOCObservation(
    interaction_id="retention_flow_001",
    description=(
        "Account deletion flow shows: 'Your friends will lose access "
        "to your updates' with red, urgent styling."
    ),
    emotion_target=EmotionTarget.FEAR,
    intent_type=IntentType.COERCIVE,
    outcome_type=OutcomeType.PLATFORM_BENEFIT,
    context_risk=ContextRisk.HIGH,
    evidence={
        "screenshot": "s3://audits/retention_flow_001.png",
        "copy": "Your friends will lose access to your updates"
    },
    tags=["account_deletion", "retention_flow"]
)

# Run detection
findings = detector.evaluate(obs)

# Output results
for f in findings:
    print(f"{f.severity.value.upper()} [{f.rule_id}] {f.label}")
    print(f"  → {f.description}")
    if f.recommendation:
        print(f"  → Recommendation: {f.recommendation}")

Output

CRITICAL [FW-001] Fearware-style coercion
  → This interaction weaponizes fear/scarcity under coercive intent,
     with unclear or harmful user outcomes in a medium/high-risk context.
  → Recommendation: Re-architect this moment to remove fear-based leverage
     and restore user agency.

4. Making It Configurable

Non‑engineers (UX auditors, ethical reviewers, policy teams) need to tweak rules without editing Python. Externalize the rules to YAML.

4.1 Rule Configuration

# eioc_rules.yaml
rules:
  - id: FW-001
    label: "Fearware-style coercion"
    description: >
      Flags interactions that target fear/scarcity with coercive intent
      and non‑beneficial user outcomes in high‑risk contexts.
    severity: critical
    emotion_target_in: ["fear", "scarcity", "guilt", "shame"]
    intent_type_in: ["coercive", "exploitative"]
    outcome_type_in: ["platform_benefit", "user_harm", "unknown"]
    context_risk_in: ["medium", "high"]

  - id: FW-002
    label: "Ambiguous nudge"
    description: >
      Flags interactions with unclear intent and unknown outcomes.
    severity: warning
    intent_type_in: ["neutral", "coercive"]
    outcome_type_in: ["unknown"]
    context_risk_in: ["medium", "high"]

4.2 Generic Configurable Rule

import yaml
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class ConfigurableRule(DetectionRule):
    """A rule defined by configuration rather than code."""

    rule_id: str
    label: str
    description: str
    severity: FindingSeverity
    criteria: Dict[str, List[str]]

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        # Map observation to comparable strings
        obs_values = {
            "emotion_target": obs.emotion_target.value,
            "intent_type": obs.intent_type.value,
            "outcome_type": obs.outcome_type.value,
            "context_risk": obs.context_risk.value,
        }

        # Check all criteria
        for field, allowed_values in self.criteria.items():
            if field not in obs_values:
                continue
            if obs_values[field] not in allowed_values:
                return None

        return DetectionFinding(
            rule_id=self.rule_id,
            severity=self.severity,
            label=self.label,
            description=self.description,
        )

def load_rules_from_yaml(path: str) -> List[DetectionRule]:
    """Load detection rules from a YAML configuration file."""
    with open(path, "r") as f:
        data = yaml.safe_load(f)

    rules: List[DetectionRule] = []

    for r in data["rules"]:
        # Extract criteria fields (anything ending in _in)
        criteria = {
            k[: -len("_in")]: v  # strip only the trailing "_in" suffix
            for k, v in r.items()
            if k.endswith("_in")
        }

        rule = ConfigurableRule(
            rule_id=r["id"],
            label=r["label"],
            description=r["description"],
            severity=FindingSeverity(r["severity"]),
            criteria=criteria,
        )
        rules.append(rule)

    return rules

4.3 Loading and Running

# Load rules from config
rules = load_rules_from_yaml("eioc_rules.yaml")

# Initialize detector
detector = EIOCDetector(rules=rules)

# Now non‑engineers can add/modify rules via YAML
# without touching the detection engine
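
A quick end‑to‑end sanity check, assuming the YAML above is saved as eioc_rules.yaml and reusing the obs from the section 3 usage example: the YAML‑driven FW-001 rule should reproduce the hard‑coded FearwareCoercionRule’s critical finding.

# Re-evaluate the retention-flow observation from section 3 with the
# YAML-loaded rules. FW-001 matches (fear + coercive + platform_benefit +
# high risk); FW-002 does not, because the outcome is not "unknown".
findings = detector.evaluate(obs)

assert any(
    f.rule_id == "FW-001" and f.severity is FindingSeverity.CRITICAL
    for f in findings
)

for f in findings:
    print(f"{f.severity.value.upper()} [{f.rule_id}] {f.label}")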

What This Enables

| Use Case | How |
|----------|-----|
| UX Reviews | Score interactions before launch |
| Design PRs | Attach EIOC findings to review |
| Dashboards | Aggregate findings across product |
| Audits | Evidence‑backed compliance reports |
| CI/CD Gates | Block deploys with CRITICAL findings (see the sketch below) |
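
As a sketch of the CI/CD gate row above, a build step could run the detector over a batch of observations and block the deploy if any of them produces a CRITICAL finding. The gate function and the idea of an observation batch are illustrative; only the detector API comes from the sections above.

import sys
from typing import List

def ci_gate(detector: EIOCDetector, observations: List[EIOCObservation]) -> None:
    """Exit non-zero if any observation triggers a CRITICAL finding."""
    blocked = False
    for o in observations:
        for f in detector.evaluate(o):
            if f.severity is FindingSeverity.CRITICAL:
                blocked = True
                print(f"BLOCKING {o.interaction_id}: [{f.rule_id}] {f.label}")
    if blocked:
        sys.exit(1)

# Example: gate a release on observations logged during design review.
# ci_gate(detector, observations_from_design_review)  # hypothetical batch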

The Bigger Picture

Fearware didn’t disappear—it evolved into dark patterns, manipulative nudges, and “growth hacks” that exploit the same emotional levers with better typography.

EIOC as a detection model gives us a way to:

  • Name the manipulation (schema)
  • Detect it systematically (rules)
  • Configure detection without code (YAML)
  • Audit with evidence (findings)

Philosophy becomes infrastructure. Framework becomes tool.

**Related reading**

- [Fearware as the Anti‑Pattern of EIOC](#)
- [Designing Beyond Fearware](#)
- [The Echoes of Fearware in Modern UX](#)

*EIOC is part of ongoing research into emotional logic, informational integrity, and ethical system design.*