EIOC as a Detection Model: From Framework to Code
Source: Dev.to
What if emotional manipulation in UX had a runtime detector?
EIOC (Emotion–Intent–Outcome–Context) began as an interpretive lens: a way to analyze why some interfaces feel coercive while others feel aligned. But a framework that can only explain is not enough. We need frameworks that can detect.
This article shows how to build EIOC into a formal detection model: typed, executable, configurable, and auditable. At detection time you are no longer philosophizing; you map a concrete interaction into a structured EIOCObservation and run it through rules.
The Four Axes
Every interaction (a screen, a flow, a message, a prompt) can be scored on four axes:
| Axis | Question |
|---|---|
| E – Emotion | What emotional state is targeted or amplified? |
| I – Intent | What does the system intend the user to do? |
| O – Outcome | What are the likely outcomes for the user (short- and long-term)? |
| C – Context | What constraints and power asymmetries shape this moment? |
For detection, each axis needs:
- Dimensions – scorable sub-questions
- Scales – categorical labels that map to numeric scores
- Thresholds/patterns – combinations that form "fearware", "manipulative", "aligned", and so on
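As a minimal sketch of how such a scale might map categorical labels to numeric scores that a threshold can cut across (the label sets, weights, and the -3 cutoff here are illustrative assumptions, not part of the EIOC schema itself):

```python
# Illustrative scales: more negative means more pressure on the user.
EMOTION_SCALE = {"empowerment": 2, "care": 1, "neutral": 0, "scarcity": -1, "fear": -2}
INTENT_SCALE = {"supportive": 1, "neutral": 0, "coercive": -1, "exploitative": -2}

FEARWARE_THRESHOLD = -3  # at or below this, flag the interaction


def pressure_score(emotion: str, intent: str) -> int:
    """Combine two axis labels into a single comparable number."""
    return EMOTION_SCALE.get(emotion, 0) + INTENT_SCALE.get(intent, 0)


print(pressure_score("fear", "coercive"))    # -3 -> crosses the threshold
print(pressure_score("care", "supportive"))  # 2  -> aligned
```

The point is only that categorical labels become comparable numbers; the real model below works directly on the labels instead.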
1. The EIOC Schema
Vocabulary
```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional, Dict, Any


class EmotionTarget(Enum):
    NEUTRAL = "neutral"
    FEAR = "fear"
    SCARCITY = "scarcity"
    GUILT = "guilt"
    SHAME = "shame"
    TRUST = "trust"
    CARE = "care"
    EMPOWERMENT = "empowerment"


class IntentType(Enum):
    SUPPORTIVE = "supportive"
    NEUTRAL = "neutral"
    COERCIVE = "coercive"
    EXPLOITATIVE = "exploitative"


class OutcomeType(Enum):
    USER_BENEFIT = "user_benefit"
    PLATFORM_BENEFIT = "platform_benefit"
    MUTUAL_BENEFIT = "mutual_benefit"
    USER_HARM = "user_harm"
    UNKNOWN = "unknown"


class ContextRisk(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNKNOWN = "unknown"
```
The observation container
```python
@dataclass
class EIOCObservation:
    """A structured interpretation of a user-facing interaction."""

    # Identification
    interaction_id: str
    description: str

    # EIOC axes
    emotion_target: EmotionTarget
    intent_type: IntentType
    outcome_type: OutcomeType
    context_risk: ContextRisk

    # Audit trail
    evidence: Optional[Dict[str, Any]] = None
    tags: Optional[List[str]] = None
```
Possible extensions
- `emotion_intensity: int` – ranges from -2 (negative) to +2 (positive)
- `journey_stage: str` – where in the user journey this occurs
- `user_segment: str` – e.g. vulnerable users, new users, high-risk contexts
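As a sketch of how these optional fields could be folded into an extended observation (the field names and the -2..+2 range follow the list above; the class name and validation are hypothetical additions):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class ExtendedEIOCObservation:
    """Core identification plus the optional extension fields (sketch)."""

    interaction_id: str
    emotion_intensity: int = 0           # -2 (negative) .. +2 (positive)
    journey_stage: Optional[str] = None  # e.g. "onboarding", "checkout"
    user_segment: Optional[str] = None   # e.g. "vulnerable", "new_user"

    def __post_init__(self) -> None:
        # Fail fast on out-of-range intensity so bad data never reaches rules
        if not -2 <= self.emotion_intensity <= 2:
            raise ValueError("emotion_intensity must be between -2 and +2")


obs = ExtendedEIOCObservation("checkout-01", emotion_intensity=-2,
                              journey_stage="checkout")
```

Validating in `__post_init__` keeps the observation layer trustworthy, so downstream rules never have to re-check ranges.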
2. Detection Logic: Turning EIOC into Rules
Think of EIOC detection as a small rule engine that:
- normalizes an interaction into an `EIOCObservation`
- applies a series of `DetectionRule` instances
- returns classifications plus rationale
2.1 Findings and severity
```python
class FindingSeverity(Enum):
    INFO = "info"
    WARNING = "warning"
    CRITICAL = "critical"


@dataclass
class DetectionFinding:
    """The output of a triggered rule."""

    rule_id: str
    severity: FindingSeverity
    label: str
    description: str
    recommendation: Optional[str] = None
```
2.2 The rule interface
```python
from abc import ABC, abstractmethod


class DetectionRule(ABC):
    """Base class for all EIOC detection rules."""

    rule_id: str
    label: str
    description: str

    @abstractmethod
    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        """Evaluate an observation. Return a finding if the rule triggers."""
        ...
```
2.3 Example: a fearware coercion rule
```python
class FearwareCoercionRule(DetectionRule):
    """Detects fearware-style manipulation patterns."""

    rule_id = "FW-001"
    label = "Fearware-style coercion"
    description = (
        "Flags interactions that intentionally target fear/scarcity "
        "with coercive intent and non-beneficial or unknown user outcomes, "
        "especially in high-risk contexts."
    )

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        fear_emotions = {
            EmotionTarget.FEAR,
            EmotionTarget.SCARCITY,
            EmotionTarget.GUILT,
            EmotionTarget.SHAME,
        }
        coercive_intents = {
            IntentType.COERCIVE,
            IntentType.EXPLOITATIVE,
        }
        harmful_outcomes = {
            OutcomeType.PLATFORM_BENEFIT,
            OutcomeType.USER_HARM,
            OutcomeType.UNKNOWN,
        }
        risky_contexts = {
            ContextRisk.MEDIUM,
            ContextRisk.HIGH,
        }

        if (
            obs.emotion_target in fear_emotions
            and obs.intent_type in coercive_intents
            and obs.outcome_type in harmful_outcomes
            and obs.context_risk in risky_contexts
        ):
            return DetectionFinding(
                rule_id=self.rule_id,
                severity=FindingSeverity.CRITICAL,
                label=self.label,
                description=(
                    "This interaction weaponizes fear/scarcity under coercive intent, "
                    "with unclear or harmful user outcomes in a medium/high-risk context."
                ),
                recommendation=(
                    "Re-architect this moment to remove fear-based leverage "
                    "and restore user agency."
                ),
            )
        return None
```
Putting it together
```python
def run_detection(obs: EIOCObservation, rules: List[DetectionRule]) -> List[DetectionFinding]:
    findings: List[DetectionFinding] = []
    for rule in rules:
        result = rule.evaluate(obs)
        if result:
            findings.append(result)
    return findings


# Example usage
obs = EIOCObservation(
    interaction_id="login-warning-01",
    description="Login page shows 'Your account will be locked in 5 minutes!'",
    emotion_target=EmotionTarget.FEAR,
    intent_type=IntentType.COERCIVE,
    outcome_type=OutcomeType.PLATFORM_BENEFIT,
    context_risk=ContextRisk.HIGH,
    evidence={"screenshot": "url/to/img"},
    tags=["login", "urgency"],
)

rules = [FearwareCoercionRule()]  # add more rule instances as needed
findings = run_detection(obs, rules)

for f in findings:
    print(f"{f.severity.value.upper()}: {f.label} - {f.description}")
```
Running the snippet above produces a CRITICAL finding for the fearware pattern shown, giving auditors a concrete, reproducible signal that the interaction needs rework.
Takeaways
By turning the EIOC lens into a typed schema and a rule-based engine, you move from explaining manipulation to detecting it at runtime. The model stays readable, auditable, and extensible: exactly what a responsible UX governance pipeline needs.
3. The Detector
The orchestrator
```python
class EIOCDetector:
    """Orchestrates EIOC rule evaluation."""

    def __init__(self, rules: List[DetectionRule]):
        self.rules = rules

    def evaluate(self, obs: EIOCObservation) -> List[DetectionFinding]:
        """Run all rules against an observation."""
        findings: List[DetectionFinding] = []
        for rule in self.rules:
            finding = rule.evaluate(obs)
            if finding is not None:
                findings.append(finding)
        return findings
```
Usage example
```python
# Initialize detector with rules
rules = [
    FearwareCoercionRule(),
    # Add more rules here...
]
detector = EIOCDetector(rules=rules)

# Create an observation
obs = EIOCObservation(
    interaction_id="retention_flow_001",
    description=(
        "Account deletion flow shows: 'Your friends will lose access "
        "to your updates' with red, urgent styling."
    ),
    emotion_target=EmotionTarget.FEAR,
    intent_type=IntentType.COERCIVE,
    outcome_type=OutcomeType.PLATFORM_BENEFIT,
    context_risk=ContextRisk.HIGH,
    evidence={
        "screenshot": "s3://audits/retention_flow_001.png",
        "copy": "Your friends will lose access to your updates",
    },
    tags=["account_deletion", "retention_flow"],
)

# Run detection
findings = detector.evaluate(obs)

# Output results
for f in findings:
    print(f"{f.severity.value.upper()} [{f.rule_id}] {f.label}")
    print(f"  -> {f.description}")
    if f.recommendation:
        print(f"  -> Recommendation: {f.recommendation}")
```
Output
```
CRITICAL [FW-001] Fearware-style coercion
  -> This interaction weaponizes fear/scarcity under coercive intent,
     with unclear or harmful user outcomes in a medium/high-risk context.
  -> Recommendation: Re-architect this moment to remove fear-based leverage
     and restore user agency.
```
4. Making It Configurable
Non-engineers (UX auditors, ethics reviewers, policy teams) need to tune rules without editing Python. Externalize the rules into YAML.
4.1 Rule configuration
```yaml
# eioc_rules.yaml
rules:
  - id: FW-001
    label: "Fearware-style coercion"
    description: >
      Flags interactions that target fear/scarcity with coercive intent
      and non-beneficial user outcomes in high-risk contexts.
    severity: critical
    emotion_target_in: ["fear", "scarcity", "guilt", "shame"]
    intent_type_in: ["coercive", "exploitative"]
    outcome_type_in: ["platform_benefit", "user_harm", "unknown"]
    context_risk_in: ["medium", "high"]

  - id: FW-002
    label: "Ambiguous nudge"
    description: >
      Flags interactions with unclear intent and unknown outcomes.
    severity: warning
    intent_type_in: ["neutral", "coercive"]
    outcome_type_in: ["unknown"]
    context_risk_in: ["medium", "high"]
```
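To make the semantics of the `*_in` keys concrete: each one is an allow-list, and a rule triggers only when every field it constrains matches. A standalone sketch of that check (the helper name is illustrative; the field names mirror the YAML above):

```python
def criteria_match(criteria: dict, obs_values: dict) -> bool:
    """True if every constrained field's value is in its allow-list.

    Fields the observation does not carry are skipped, so rules stay
    permissive about attributes they never mention.
    """
    for field, allowed in criteria.items():
        if field in obs_values and obs_values[field] not in allowed:
            return False
    return True


fw_002 = {
    "intent_type": ["neutral", "coercive"],
    "outcome_type": ["unknown"],
    "context_risk": ["medium", "high"],
}
obs = {"emotion_target": "neutral", "intent_type": "neutral",
       "outcome_type": "unknown", "context_risk": "medium"}

print(criteria_match(fw_002, obs))  # True: FW-002 would raise a warning
```

Note that FW-002 sets no `emotion_target_in`, so any emotion passes; only the three constrained fields are checked.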
4.2 A generic configurable rule
```python
import yaml


@dataclass
class ConfigurableRule(DetectionRule):
    """A rule defined by configuration rather than code."""

    rule_id: str
    label: str
    description: str
    severity: FindingSeverity
    criteria: Dict[str, List[str]]

    def evaluate(self, obs: EIOCObservation) -> Optional[DetectionFinding]:
        # Map observation to comparable strings
        obs_values = {
            "emotion_target": obs.emotion_target.value,
            "intent_type": obs.intent_type.value,
            "outcome_type": obs.outcome_type.value,
            "context_risk": obs.context_risk.value,
        }

        # Check all criteria; any mismatch vetoes the rule
        for field, allowed_values in self.criteria.items():
            if field not in obs_values:
                continue
            if obs_values[field] not in allowed_values:
                return None

        return DetectionFinding(
            rule_id=self.rule_id,
            severity=self.severity,
            label=self.label,
            description=self.description,
        )


def load_rules_from_yaml(path: str) -> List[DetectionRule]:
    """Load detection rules from a YAML configuration file."""
    with open(path, "r") as f:
        data = yaml.safe_load(f)

    rules: List[DetectionRule] = []
    for r in data["rules"]:
        # Extract criteria fields (anything ending in _in); removesuffix
        # only strips the trailing "_in", unlike str.replace
        criteria = {
            k.removesuffix("_in"): v
            for k, v in r.items()
            if k.endswith("_in")
        }
        rule = ConfigurableRule(
            rule_id=r["id"],
            label=r["label"],
            description=r["description"],
            severity=FindingSeverity(r["severity"]),
            criteria=criteria,
        )
        rules.append(rule)
    return rules
```
4.3 Loading and running
```python
# Load rules from config
rules = load_rules_from_yaml("eioc_rules.yaml")

# Initialize detector
detector = EIOCDetector(rules=rules)

# Non-engineers can now add or modify rules via YAML
# without touching the detection engine
```
What this enables
| Use case | Approach |
|---|---|
| UX reviews | Score interactions before release |
| Design PRs | Attach EIOC findings to reviews |
| Dashboards | Aggregate findings across a product |
| Audits | Evidence-backed compliance reports |
| CI/CD gates | Block deploys on CRITICAL findings |
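For the CI/CD row, a minimal sketch of a gate step: it assumes an earlier audit job has already reduced the findings to a list of severity strings, and the function name and exit-code convention are illustrative.

```python
def eioc_gate(severities: list[str]) -> int:
    """Return 1 (block the deploy) if any finding is critical, else 0."""
    if "critical" in severities:
        print("EIOC gate: CRITICAL finding present, blocking deploy")
        return 1
    print("EIOC gate: no critical findings, deploy allowed")
    return 0


# In CI, the severity list would come from a findings report produced
# by the detector in an earlier pipeline step.
exit_code = eioc_gate(["warning", "critical"])  # -> 1
```

Passing `exit_code` to `sys.exit()` would then fail the pipeline stage whenever a CRITICAL finding is present.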
The Bigger Picture
Fearware never went away; it evolved into dark patterns, manipulative prompts, and "growth hacks" that pull the same emotional levers with better typography.
EIOC as a detection model gives us a way to:
- Name manipulation (the schema)
- Detect it (the rules)
- Configure detection without code (the YAML)
- Audit it with evidence (the findings)
Philosophy Becomes Infrastructure
The framework becomes a tool.
Related reading
EIOC is part of ongoing work on emotional logic, informational integrity, and ethical systems design.