The Ethics of Simulation: How to Test Trauma-Informed Features Without Exploiting Real Pain

Published: December 12, 2025 at 09:00 AM EST
5 min read
Source: Dev.to

Overview

Part of the CrisisCore Build Log – when the testing strategy becomes a moral question

Where does your test data come from?

For most applications, nobody cares. Mock users. Fake addresses. Random strings.

For a pain tracker? The test data is descriptions of suffering. And that raises questions I didn’t expect when I started this project:

  • Is it ethical to generate realistic crisis scenarios?
  • Who gets to write test cases about suicidal ideation?
  • How do you test a trauma‑informed system without retraumatizing your own team?

This post is my attempt to think through those questions honestly.

The Uncomfortable Reality

To test a pain tracker properly, I need test data that looks like this:

const sampleMoodEntries: MoodEntry[] = [
  {
    mood: 2,
    energy: 1,
    anxiety: 9,
    context: 'Severe pain flare-up, emergency room visit',
    triggers: ['acute pain', 'medical emergency', 'work absence'],
    notes: 'Overwhelmed by sudden pain onset. Anxious about work and recovery.',
  },
  // ...
];

That’s someone’s worst day, encoded in TypeScript. The more realistic the test data, the better my crisis detection works. But realism has a cost. Every time a developer opens that fixture file, they’re reading a description of suffering.
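The post never shows the MoodEntry interface itself. A shape consistent with the fixture above (and with the generator later in this post) might look like the following; the field names, optionality, and ranges are inferred, not confirmed:

```typescript
// Hypothetical MoodEntry shape, inferred from the fixtures in this post.
// The real interface is not shown in the source.
interface MoodEntry {
  id?: string;        // assigned by the generator, absent in hand-written fixtures
  timestamp?: string;
  mood: number;       // 1-10, lower = worse
  energy?: number;    // 1-10
  anxiety: number;    // 1-10, higher = worse
  context: string;
  triggers: string[];
  notes: string;
}

const entry: MoodEntry = {
  mood: 2,
  energy: 1,
  anxiety: 9,
  context: 'Severe pain flare-up, emergency room visit',
  triggers: ['acute pain', 'medical emergency', 'work absence'],
  notes: 'Overwhelmed by sudden pain onset. Anxious about work and recovery.',
};
```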

Where’s the line between testing rigor and trauma exploitation?

Principle 1: Synthetic Data Should Be Fictional, Not Extracted

The Extraction Problem

Real pain journals contain specific, identifiable details:

  • “My boss yelled at me after the third sick day”
  • “The new medication made me throw up during my daughter’s recital”
  • “I’m scared my wife will leave”

Even “anonymized,” these remain someone’s lived experience. Using them in tests means:

  • Developers read private moments repeatedly
  • Data could be reconstructed from patterns
  • The person never consented to their worst days becoming test fixtures

The Synthesis Approach

Instead, I generate fictional‑but‑plausible data:

/**
 * SYNTHETIC DATA GENERATION
 *
 * These entries are FICTIONAL. They represent patterns, not people.
 * No real person's pain journal was used to create these fixtures.
 */
export function generateSyntheticMoodEntry(
  scenario: 'crisis' | 'recovery' | 'stable' | 'declining'
): MoodEntry {
  const patterns = {
    crisis: {
      moodRange: [1, 3],
      anxietyRange: [7, 10],
      contextTemplates: [
        'Unexpected pain flare',
        'Sleep disruption for multiple days',
        'Medication change with difficult adjustment',
      ],
      triggerPool: ['pain spike', 'sleep loss', 'isolation', 'work stress'],
    },
    // ... other scenarios
  };

  const pattern = patterns[scenario];

  return {
    id: generateId(),
    timestamp: generateTimestamp(),
    mood: randomInRange(pattern.moodRange),
    anxiety: randomInRange(pattern.anxietyRange),
    context: randomChoice(pattern.contextTemplates),
    triggers: randomSubset(pattern.triggerPool, 2, 4),
    // Notes are generic, never mimicking real journal entries
    notes: generateGenericNote(scenario),
  };
}

The data is realistic enough to test patterns, but it doesn’t represent any real person’s experience.
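The generator above leans on small utilities (randomInRange, randomChoice, randomSubset) that the post doesn't show. A minimal sketch of what they might look like, assuming ranges are inclusive [min, max] tuples:

```typescript
// Hypothetical implementations of the helpers used by
// generateSyntheticMoodEntry; the post does not show them.

// Inclusive integer in [min, max]
function randomInRange([min, max]: [number, number]): number {
  return min + Math.floor(Math.random() * (max - min + 1));
}

// One element chosen uniformly at random
function randomChoice<T>(items: T[]): T {
  return items[Math.floor(Math.random() * items.length)];
}

// Between min and max distinct elements from the pool
function randomSubset<T>(pool: T[], min: number, max: number): T[] {
  const count = randomInRange([min, Math.min(max, pool.length)]);
  // Shuffle a copy, then take the first `count` items
  const shuffled = [...pool].sort(() => Math.random() - 0.5);
  return shuffled.slice(0, count);
}
```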

Principle 2: Consent‑First Testing with Lived Experience

Synthetic data tests the code. But does the code actually help real people? For that, you need human testers. Recruiting people with chronic pain to test a pain tracker requires serious ethical consideration.

What I Don’t Do

  • ❌ “Could you log some real entries so I can see how the app handles them?” – extracts labor and data without proper compensation or consent.
  • ❌ Recruiting from pain‑support communities without disclosure – people in support groups are there for support, not to be research subjects.
  • ❌ Offering “free premium access” as the only compensation – that’s a discount, not genuine compensation for emotional labor.

What I Do

PARTICIPANT INFORMATION SHEET

What we're asking:
- Use the app during normal daily activities
- Share feedback on whether the crisis features felt helpful
- Optionally: describe moments where the app did/didn't meet your needs

What we're NOT asking:
- Access to your actual pain data
- Details of your medical history
- Any information you don't want to share

You can withdraw at any time without explanation.

Compensation that values emotional labor

Testing a trauma‑informed app may confront participants with their own trauma. My minimum compensation rate for lived‑experience testing is what I’d pay for generic UX testing; the emotional labor involved warrants no less.

Exit ramps and support resources

Every testing session includes:

  • A clear way to pause or stop
  • Crisis resources (e.g., 988 in the US, 9‑8‑8 in Canada) visible throughout
  • A debrief where participants can process the experience
  • Follow‑up check‑in 24–48 hours later

Veto power over findings

If a participant later regrets sharing something, they can request its removal from any analysis or documentation.

Principle 3: Trigger Warnings in Test Suites

Developers have trauma too. When I open a test file that contains crisis scenarios, I’m reading descriptions of distress repeatedly. If a developer on my team has personal experience with chronic pain, suicidal ideation, or medical trauma, those test files aren’t neutral.

Test Environment Content Warnings

/**
 * @fileoverview Mood and crisis test fixtures
 *
 * ⚠️ CONTENT WARNING: This file contains synthetic test data
 * representing crisis states, including:
 * - High anxiety/distress scenarios
 * - Pain flare simulations
 * - Low mood/hopelessness patterns
 *
 * These are FICTIONAL and generated from patterns, not real data.
 * If you need to step away, the test suite will run without modification.
 *
 * Crisis resources: 988 (US), 9‑8‑8 (Canada)
 */

Separating Sensitive Tests

src/test/
├── fixtures/
│   ├── pain-entries.ts          # Basic pain data
│   ├── mood-entries.ts          # Mood tracking data
│   └── crisis-scenarios.ts      # ⚠️ Crisis state simulations
├── __tests__/
│   ├── analytics/               # Can run without crisis data
│   ├── export/                  # Can run without crisis data
│   └── crisis/                  # Requires crisis fixtures
│       └── README.md            # Content warning + rationale
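A banner only protects people if it's actually present. One way to enforce that, which is my own assumption rather than something the post describes, is a small check run in CI over the crisis fixture files:

```typescript
// Hypothetical CI check: every crisis fixture must open with the
// content-warning banner. Not from the original post.

// The banner should live at the top of the file, so only inspect the header.
export function hasContentWarning(fileText: string): boolean {
  return fileText.slice(0, 500).includes('CONTENT WARNING');
}

// Given a map of path -> file contents, return the paths missing a banner.
export function fixturesMissingWarning(
  files: Record<string, string>
): string[] {
  return Object.keys(files).filter((path) => !hasContentWarning(files[path]));
}
```

A CI step could read everything under src/test/fixtures/ that matches crisis fixtures, pass it through fixturesMissingWarning, and fail the build if the result is non-empty.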

Test commands allow selective execution:

{
  "scripts": {
    "test": "vitest",
    "test:no-crisis": "vitest --exclude='**/crisis/**'",
    "test:crisis-only": "vitest crisis/"
  }
}

A developer having a hard day can run npm run test:no-crisis and still validate their work.

Principle 4: The Representation vs. Exploitation Line

When does test data cross from “representing pain patterns” to “exploiting suffering for engineering purposes”? I don’t have a perfect answer, but I use a framework of questions before creating any new fixture:

(Content truncated in the original source.)
