Glin Profanity: A Practical Toolkit for Content Moderation

Published: December 30, 2025 at 06:48 PM EST
6 min read
Source: Dev.to

What is Glin‑Profanity?

Glin‑Profanity is an open‑source content‑moderation library for JavaScript/TypeScript and Python.
Unlike basic word‑list filters, it tackles the evasion techniques users actually try:

  • Leetspeak substitutions – e.g., f4ck, 5h1t
  • Unicode homoglyphs – Cyrillic characters that look like Latin letters
  • Character‑separation tricks
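To make the homoglyph case concrete, here is a conceptual sketch of the idea (not Glin‑Profanity's actual internals): fold a handful of Latin look‑alike code points back onto ASCII before running a word‑list match. The mapping below is a deliberately tiny illustration.

```javascript
// Conceptual sketch, not the library's implementation: fold a few
// Latin look-alike code points back onto ASCII before matching.
const LOOKALIKES = new Map([
  ['\u0430', 'a'], // Cyrillic а
  ['\u0435', 'e'], // Cyrillic е
  ['\u043e', 'o'], // Cyrillic о
  ['\u057d', 'u'], // Armenian ս, visually close to Latin u
]);

const foldHomoglyphs = (text) =>
  [...text].map((ch) => LOOKALIKES.get(ch) ?? ch).join('');

foldHomoglyphs('f\u057dck'); // "fuck": now matches a plain dictionary entry
```

A real normalizer covers thousands of confusable code points; the point is that the filter compares normalized text, not the raw input.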

Key capabilities

  • Leetspeak & Unicode normalization (catches @$$, fսck, sh!t)
  • 23 language dictionaries built‑in
  • Optional ML toxicity detection via TensorFlow.js / TensorFlow
  • 21 M+ operations / sec with LRU caching
  • Works in Node.js, browsers, and Python

Try It Live

Test the filter directly in your browser — no installation required.

Open Interactive Demo

Quick Reference

Feature       JavaScript / TypeScript       Python
Install       npm install glin-profanity    pip install glin-profanity
Languages     23 supported                  23 supported
Performance   21 M ops/sec                  Native C extension
ML Support    TensorFlow.js                 TensorFlow
Bundle Size   ~45 KB (tree‑shakeable)       N/A

Installation

JavaScript / TypeScript

npm install glin-profanity

Python

pip install glin-profanity

Optional ML toxicity support (JavaScript only)

npm install glin-profanity @tensorflow/tfjs

Code Templates

Template 1 – Basic Profanity Check

JavaScript

import { checkProfanity } from 'glin-profanity';

const result = checkProfanity('user input here', {
  languages: ['english']
});

if (result.containsProfanity) {
  console.log('Blocked words:', result.profaneWords);
}

Python

from glin_profanity import Filter

filter = Filter({"languages": ["english"]})
result = filter.check_profanity("user input here")

if result.contains_profanity:
    print(f"Blocked words: {result.profane_words}")

Template 2 – Leetspeak & Unicode Evasion Detection

Catches: f4ck, 5h1t, @$$, fսck (Unicode homoglyph), s.h.i.t (separated characters)

import { Filter } from 'glin-profanity';

const filter = new Filter({
  detectLeetspeak: true,
  leetspeakLevel: 'aggressive', // 'basic' | 'moderate' | 'aggressive'
  normalizeUnicode: true
});

filter.isProfane('f4ck');   // true
filter.isProfane('5h1t');   // true
filter.isProfane('@$$');    // true
filter.isProfane('fսck');   // true (homoglyph 'ս')

Leetspeak Levels

Level        Description
basic        Common substitutions (4→a, 3→e, 1→i, 0→o)
moderate     basic + extended symbols (@→a, $→s, !→i)
aggressive   moderate + separated characters, mixed patterns
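The layering in the table can be sketched like this (illustrative only; the library's real substitution tables are larger). Each level extends the previous one, with the aggressive level also collapsing separator characters:

```javascript
// Illustrative substitution tables; each level builds on the previous one.
const BASIC = { '4': 'a', '3': 'e', '1': 'i', '0': 'o' };
const MODERATE = { ...BASIC, '@': 'a', '$': 's', '!': 'i' };

function decodeLeet(text, level) {
  const map = level === 'basic' ? BASIC : MODERATE;
  let out = [...text.toLowerCase()].map((ch) => map[ch] ?? ch).join('');
  if (level === 'aggressive') {
    out = out.replace(/[.\s\-_]+/g, ''); // also collapse separator tricks
  }
  return out;
}

decodeLeet('h3ll0', 'basic');      // "hello"
decodeLeet('@$$', 'moderate');     // "ass"
decodeLeet('b.4.d', 'aggressive'); // "bad"
```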

Template 3 – Multi‑Language Detection

// Detect a specific set of languages
const filter = new Filter({
  languages: ['english', 'spanish', 'french', 'german']
});

// Detect all supported languages
const filterAll = new Filter({ allLanguages: true });

Supported Languages

arabic, chinese, czech, danish, dutch, english, esperanto,
finnish, french, german, hindi, hungarian, italian, japanese,
korean, norwegian, persian, polish, portuguese, russian,
spanish, swedish, thai, turkish

Template 4 – Auto‑Replace Profanity

const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true
});

const result = filter.checkProfanity('What the f4ck');
console.log(result.processedText); // "What the ***"

Custom replacement patterns

// Asterisks matching word length
{ replaceWith: '*' }        // "f**k" → "****"

// Fixed replacement
{ replaceWith: '[FILTERED]' } // "f**k" → "[FILTERED]"

// Character‑based
{ replaceWith: '#' }        // "f**k" → "####"

Template 5 – Severity‑Based Moderation

import { Filter, SeverityLevel } from 'glin-profanity';

const filter = new Filter({ detectLeetspeak: true });
const result = filter.checkProfanity(userInput);

switch (result.maxSeverity) {
  case SeverityLevel.HIGH:
    blockMessage(result);
    notifyModerators(result);
    break;
  case SeverityLevel.MEDIUM:
    sendFiltered(result.processedText);
    flagForReview(result);
    break;
  case SeverityLevel.LOW:
    sendFiltered(result.processedText);
    break;
  default:
    send(userInput);
}

Template 6 – React Hook for Real‑Time Input

import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText, isChecking } = useProfanityChecker({
    detectLeetspeak: true,
    languages: ['english']
  });

  return (
    <div>
      <input
        type="text"
        onChange={e => checkText(e.target.value)}
        placeholder="Type a message..."
        disabled={isChecking}
      />
      {result?.containsProfanity && (
        <p style={{ color: 'red' }}>
          Please remove inappropriate language.
        </p>
      )}
    </div>
  );
}

Template 7 – ML Toxicity Detection (v3+)

Catches toxic content without explicit profanity, e.g.:

  • “You’re the worst player ever”
  • “Nobody wants you here”
  • “Just quit already”

import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

// Load once on app startup
await loadToxicityModel({ threshold: 0.9 });

// Check any text
const result = await checkToxicity("You're terrible at this");
if (result.toxic) {
  console.log('Toxic content detected');
}

Example Output

console.log(result);
// {
//   toxic: true,
//   categories: {
//     toxicity: 0.92,
//     insult: 0.87,
//     threat: 0.12,
//     identity_attack: 0.08,
//     obscene: 0.45
//   }
// }

Note: The ML model runs 100 % locally. No API calls, no data leaves your server.

Template 8 – Full Chat Moderation Pipeline

import { Filter, SeverityLevel } from 'glin-profanity';
import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

// Setup
const filter = new Filter({
  languages: ['english', 'spanish'],
  detectLeetspeak: true,
  leetspeakLevel: 'moderate',
  normalizeUnicode: true,
  replaceWith: '***',
});

await loadToxicityModel({ threshold: 0.85 });

// Moderation function
async function moderateMessage(text) {
  // 1️⃣ Fast rule‑based check
  const profanity = filter.checkProfanity(text);

  // 2️⃣ ML toxicity check
  const toxicity = await checkToxicity(text);

  // 3️⃣ Decision logic
  if (profanity.maxSeverity === SeverityLevel.HIGH) {
    return { action: 'block', reason: 'severe_profanity' };
  }

  if (toxicity.toxic) {
    return {
      action: 'flag',
      text: profanity.processedText,
      reason: 'toxic_content',
    };
  }

  if (profanity.containsProfanity) {
    return { action: 'filter', text: profanity.processedText };
  }

  return { action: 'allow', text };
}

// Usage
const result = await moderateMessage('User message here');

Template 9 – Express.js Middleware

import { Filter } from 'glin-profanity';
import express from 'express';
import { commentHandler } from './handlers/commentHandler.js';
import { getNestedValue } from './utils/getNestedValue.js';

const app = express();

const filter = new Filter({
  detectLeetspeak: true,
  languages: ['english'],
});

function profanityMiddleware(req, res, next) {
  // Fields (dot‑notation) that should be scanned for profanity
  const fieldsToCheck = ['body.message', 'body.comment', 'body.bio'];

  for (const field of fieldsToCheck) {
    const value = getNestedValue(req, field);
    if (value && filter.isProfane(value)) {
      return res.status(400).json({
        error: 'Content contains inappropriate language',
      });
    }
  }

  next();
}

// Route example
app.post('/api/comments', profanityMiddleware, commentHandler);

How it works

  1. Initialize the profanity filter – glin-profanity is configured to detect leetspeak and to use the English dictionary.
  2. Define the middleware – profanityMiddleware iterates over the list of fields that may contain user‑generated text.
  3. Extract nested values – getNestedValue(req, field) safely reads a value from a dot‑notation path (e.g., req.body.message).
  4. Check for profanity – If any field contains profane content, the request is rejected with a 400 Bad Request response.
  5. Proceed when clean – If no profanity is found, next() passes control to the next handler (commentHandler in the example).

Adding the middleware to other routes

app.put('/api/profile', profanityMiddleware, profileUpdateHandler);
app.post('/api/posts', profanityMiddleware, postCreateHandler);

Utility: getNestedValue

// utils/getNestedValue.js
export function getNestedValue(obj, path) {
  return path.split('.').reduce((acc, key) => (acc ? acc[key] : undefined), obj);
}
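A quick check of the helper's behavior: missing paths resolve to undefined instead of throwing, which is why the middleware can scan optional fields safely.

```javascript
// Same helper as utils/getNestedValue.js above.
function getNestedValue(obj, path) {
  return path.split('.').reduce((acc, key) => (acc ? acc[key] : undefined), obj);
}

const req = { body: { comment: 'nice post', user: { name: 'sam' } } };

getNestedValue(req, 'body.comment');      // "nice post"
getNestedValue(req, 'body.user.name');    // "sam"
getNestedValue(req, 'body.missing.deep'); // undefined, no TypeError
```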

Template 10 – Custom Whitelist / Blacklist

const filter = new Filter({
  languages: ['english'],
  ignoreWords: ['hell', 'damn'],   // Allow these words
  customWords: ['badword', 'toxic'] // Add custom blocked words
});

Architecture

Glin Profanity Processing Flow

Performance Benchmarks

Operation                   Speed
Simple check                21 M ops/sec
With leetspeak (moderate)   8.5 M ops/sec
Multi‑language (3 langs)    18 M ops/sec
Unicode normalization       15 M ops/sec

Results are cached using an LRU strategy.
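The caching idea can be pictured with a minimal LRU sketch (not the library's code, whose cache may differ): a JavaScript Map iterates in insertion order, so re-inserting a key on access marks it most recently used, and the first key is always the eviction candidate.

```javascript
// Minimal LRU cache sketch; glin-profanity's actual cache may differ.
class LruCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map(); // Map iterates in insertion order
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value); // re-insert to mark as most recently used
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    else if (this.map.size >= this.limit) {
      this.map.delete(this.map.keys().next().value); // evict the oldest entry
    }
    this.map.set(key, value);
  }
}

// Repeated messages (common in chat) skip the scan entirely:
const cache = new LruCache(2);
cache.set('gg wp', false);
cache.set('nice one', false);
cache.get('gg wp');        // touch: 'nice one' is now the oldest
cache.set('hello', false); // evicts 'nice one'
```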

API Quick Reference

Filter Options

interface FilterOptions {
  languages?: string[];               // ['english', 'spanish', …]
  allLanguages?: boolean;              // Check all 23 languages
  detectLeetspeak?: boolean;           // Enable leetspeak detection
  leetspeakLevel?: 'basic' | 'moderate' | 'aggressive';
  normalizeUnicode?: boolean;           // Handle Unicode homoglyphs
  replaceWith?: string;                 // Replacement character/string
  ignoreWords?: string[];              // Whitelist
  customWords?: string[];              // Additional blocked words
}

Result Object

interface CheckResult {
  containsProfanity: boolean;
  profaneWords: string[];
  processedText: string;   // Text after replacements are applied
  maxSeverity: SeverityLevel;
  matches: MatchDetail[];
}

Resources

  • Live Demo
  • GitHub Repository
  • npm Package
  • PyPI Package
  • Full Documentation

Tags: javascript, typescript, python, react, opensource, webdev, contentmoderation, npm, profanityfilter
