How to Add Profanity Filtering to Your OpenClaw (Moltbot/Clawdbot) Agent
Source: Dev.to
Table of Contents
- The Problem: AI‑Based Profanity Checking Is Expensive
- Why openclaw‑profanity?
- Integration Options
- Option 1: Profanity Guard Hook
- Option 2: Custom Skill
- Option 3: Direct Integration
- Platform‑Specific Examples
- Advanced: Hybrid AI + Local Filtering
- Configuration Options
- Quick Start
- FAQ
The Problem: AI‑Based Profanity Checking Is Expensive
| Messages / Day | Monthly Cost (AI‑only) | With openclaw‑profanity |
|---|---|---|
| 500 | $5 – $15 | $0 |
| 1,000 | $10 – $30 | $0 |
| 5,000 | $50 – $150 | $0 |
| 10,000 | $100 – $300 | $0 |
That’s just for profanity checking — before your agent does anything useful.
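The table's figures follow from simple per-call arithmetic. As a sanity check, here is the calculation, assuming a hypothetical price of $0.001 per AI moderation call (an illustrative figure consistent with the table, not an official rate):

```js
// Rough monthly cost of AI-only profanity checking.
// pricePerCheck is an assumed, illustrative figure -- not an official API price.
function monthlyModerationCost(messagesPerDay, pricePerCheck) {
  const DAYS_PER_MONTH = 30;
  return messagesPerDay * DAYS_PER_MONTH * pricePerCheck;
}

// 1,000 messages/day at $0.001/check → about $30/month,
// the top of the $10–$30 range in the table above.
console.log(monthlyModerationCost(1000, 0.001));
```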
Other Problems with AI‑Only Moderation
| Issue | AI‑Only | openclaw‑profanity |
|---|---|---|
| Latency | 200 – 500 ms per check | Sub‑millisecond (local) |
A single local hook removes both the cost and the latency. For example:

```js
import { OpenClawAgent } from "openclaw";
import { profanityGuardHook } from "openclaw-profanity/hooks";

const agent = new OpenClawAgent({ /* your config */ });

agent.useHook(
  profanityGuardHook({
    action: "censor",
    onViolation: (msg, result) => {
      console.log(`Filtered: ${result.profaneWords.join(", ")}`);
    }
  })
);

agent.start();
```
Integration Options
Option 1: Profanity Guard Hook
Add a single hook to filter every incoming message.
Option 2: Custom Skill
Create a skill that calls checkProfanity and decides how to handle the result.
Option 3: Direct Integration
Use the Filter class directly in your message‑handling pipeline.
Platform‑Specific Examples
Telegram, Discord, Slack
The same hook works; just swap the adapter (TelegramAdapter, DiscordAdapter, SlackAdapter) in the OpenClawAgent constructor.
Advanced: Hybrid AI + Local Filtering
You can combine a cheap local filter with a high‑precision AI model for borderline cases:
```js
import { checkProfanity, censorText } from "openclaw-profanity";

async function handleIncomingMessage(message, context) {
  // 1️⃣ Fast local check
  const result = await checkProfanity(message.text);
  if (result.isProfane && result.confidence > 0.9) {
    // High confidence → block immediately
    return context.reply("Please keep the conversation respectful.");
  }

  // 2️⃣ Flagged with medium confidence → run a more expensive AI check (optional)
  if (result.isProfane) {
    const aiResult = await aiProfanityCheck(message.text); // your AI call
    if (aiResult.isProfane) {
      return context.reply("Please keep the conversation respectful.");
    }
  }

  // Otherwise, censor anything borderline and continue
  const censored = censorText(message.text);
  message.text = censored.processedText;

  // 3️⃣ Forward to the main LLM
  return context.next(message);
}
```
Configuration Options
| Option | Type | Default | Description |
|---|---|---|---|
| action | "censor" \| "block" | "censor" | What to do when profanity is detected |
| languages | string[] | ["en"] | ISO‑639‑1 language codes to check |
| replacement | string | "***" | Text used to replace profane words (when action: "censor") |
| detectLeetspeak | boolean | true | Enable leet‑speak detection |
| detectUnicode | boolean | true | Enable Unicode‑substitution detection |
| onViolation | (msg, result) => void | null | Callback invoked on each violation |
| onError | (err) => void | null | Callback for unexpected errors |
All options are optional; the hook works out‑of‑the‑box with sensible defaults.
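To make the action: "censor", replacement, and preserveLength options concrete, here is a minimal illustrative sketch of how censoring typically works. This is not the library's actual implementation; the `censor` helper and its word list are hypothetical:

```js
// Hypothetical sketch of "censor" semantics -- not openclaw-profanity's real code.
// `words` stands in for the library's built-in profanity list.
function censor(text, words, { replacement = "***", preserveLength = false } = {}) {
  let out = text;
  for (const word of words) {
    // Match whole words only, so clean words that merely contain a
    // flagged substring are left alone.
    const re = new RegExp(`\\b${word}\\b`, "gi");
    out = out.replace(re, (match) =>
      preserveLength ? "*".repeat(match.length) : replacement
    );
  }
  return out;
}

console.log(censor("you darn fool", ["darn"])); // "you *** fool"
console.log(censor("you darn fool", ["darn"], { preserveLength: true })); // "you **** fool"
```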
Quick Start
```sh
# 1️⃣ Install the package
npm i openclaw-profanity
```

```js
// 2️⃣ Add the hook (3 lines of code)
import { OpenClawAgent } from "openclaw";
import { profanityGuardHook } from "openclaw-profanity/hooks";

const agent = new OpenClawAgent({ /* your config */ });
agent.useHook(profanityGuardHook({ action: "censor", languages: ["en"] }));
agent.start();
```
That’s it: you now have complete profanity protection with sub‑millisecond latency and no recurring cost.
FAQ
Q: My bot already uses GPT/Claude/Gemini. Why not ask the LLM to check profanity?
A: Each check incurs API latency and cost. A local filter handles the vast majority of cases instantly and for free.
Q: Can I block messages instead of censoring them?
A: Yes. Set action: "block" in the hook or skill configuration.
Q: How do I add support for a new language?
A: Contribute a language file to the glin-profanity repository or open an issue; the library is designed for community extensions.
Q: Does the filter work on emojis or images?
A: The core library only processes text. For media, you’ll need an OCR step before passing the extracted text to checkProfanity.
Q: Is the library safe for production?
A: It’s battle‑tested in multiple high‑traffic bots and includes deterministic results with no external dependencies.
📱 Platform‑Specific Examples
Telegram
```js
import { OpenClawAgent } from "openclaw";
import { TelegramAdapter } from "openclaw/adapters/telegram";
import { profanityGuardHook } from "openclaw-profanity/hooks";

const agent = new OpenClawAgent({
  adapter: new TelegramAdapter({
    token: process.env.TELEGRAM_BOT_TOKEN
  })
});

agent.useHook(
  profanityGuardHook({
    action: "block",
    warningMessage: "Please keep the chat friendly!"
  })
);
```
Discord
```js
import { OpenClawAgent } from "openclaw";
import { DiscordAdapter } from "openclaw/adapters/discord";
import { profanityGuardHook } from "openclaw-profanity/hooks";

const agent = new OpenClawAgent({
  adapter: new DiscordAdapter({
    token: process.env.DISCORD_BOT_TOKEN
  })
});

agent.useHook(
  profanityGuardHook({
    action: "censor",
    // Discord-specific: delete the original message and repost it censored
    onViolation: async (msg, result, context) => {
      await msg.delete();
      await context.reply(`${msg.author}: ${result.censoredText}`);
    }
  })
);
```
Slack
```js
import { OpenClawAgent } from "openclaw";
import { SlackAdapter } from "openclaw/adapters/slack";
import { profanityGuardHook } from "openclaw-profanity/hooks";

const agent = new OpenClawAgent({
  adapter: new SlackAdapter({
    token: process.env.SLACK_BOT_TOKEN
  })
});

agent.useHook(
  profanityGuardHook({
    action: "censor",
    onViolation: async (msg, result) => {
      // Notify workspace admins
      await notifyAdmins(msg.channel, result);
    }
  })
);
```
⚡ Hybrid Moderation – Fast Local + AI When Needed
```js
import { checkProfanity } from "openclaw-profanity";

async function smartModeration(message, agent) {
  // 1️⃣ Fast local check
  const local = await checkProfanity(message.text);
  if (local.isProfane && local.confidence > 0.95) {
    return { action: "block" };
  }

  // 2️⃣ Uncertain → ask the LLM for context (rare)
  const aiAnalysis = await agent.analyze({
    prompt: `Is this message inappropriate? Consider context and intent: "${message.text}"`,
    format: "json"
  });
  return aiAnalysis.inappropriate ? { action: "block" } : { action: "allow" };
}
```
Result
| Metric | Value |
|---|---|
| Instant decisions (local) | 95 % of messages |
| Escalated to AI | 5 % (only ambiguous cases) |
| Cost reduction vs. AI‑only | 90 %+ |
| Avg. local latency | sub‑ms |
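The cost-reduction figure follows directly from the escalation rate: local checks are free, so only the escalated fraction of messages incurs an API call. A quick sanity check (the per-check price is a hypothetical, illustrative figure):

```js
// Expected per-message moderation cost under hybrid filtering.
// Local checks cost nothing; only escalated messages hit the AI API.
function hybridCostPerMessage(escalationRate, aiPricePerCheck) {
  return escalationRate * aiPricePerCheck;
}

const aiOnly = hybridCostPerMessage(1.0, 0.001);  // every message goes to the AI
const hybrid = hybridCostPerMessage(0.05, 0.001); // 5% escalation, as in the table
console.log(1 - hybrid / aiOnly); // ≈ 0.95, i.e. the "90%+" reduction above
```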
🛠️ Advanced Filter Configuration
```js
import { Filter } from "openclaw-profanity";

const filter = new Filter({
  // Languages (default = all 24)
  languages: ["en", "es", "fr"],

  // Evasion detection
  detectLeetspeak: true, // f4ck, sh1t
  detectUnicode: true,   // Cyrillic/Greek substitutions
  detectSpacing: true,   // f u c k

  // Censorship options
  replaceWith: "*",      // character used for replacement
  preserveLength: true,  // "****" vs "***"

  // Whitelist (words to ignore)
  whitelist: ["assistant", "class"],

  // Custom profanity list
  customWords: ["badword1", "badword2"]
});
```
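Evasion detection of the kind configured above generally works by normalizing the input before matching it against the word list. Here is a simplified, hypothetical version of such a normalization pass (the real engine's rules are more thorough):

```js
// Simplified normalization for evasion detection -- an illustration of the
// technique, not the engine's actual rules.
const LEET_MAP = { "0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "7": "t", "@": "a", "$": "s" };

function normalize(text) {
  return text
    .toLowerCase()
    // Map common leetspeak substitutions back to letters ("h3llo" → "hello").
    .replace(/[013457@$]/g, (c) => LEET_MAP[c])
    // Collapse single characters separated by spaces ("b a d" → "bad").
    .replace(/\b(?:\w )+\w\b/g, (m) => m.replace(/ /g, ""));
}

console.log(normalize("h3llo")); // "hello"
console.log(normalize("b a d")); // "bad"
```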
❓ Frequently Asked Questions
| Question | Answer |
|---|---|
| Is the library fast enough for real‑time bots? | Yes – sub‑millisecond response time (200‑500× faster than a remote API). |
| Will “Scunthorpe” be falsely flagged? | No. The engine uses smart word‑boundary detection and a built‑in whitelist. |
| Can I add my own words? | Absolutely. Add them via customWords; they inherit all evasion detection (leetspeak, Unicode, spacing, etc.). |
| Is this the same as the old Moltbot/Clawdbot library? | Yes. OpenClaw is the new name; the integration works identically. |
| What core engine powers this? | glin-profanity – a battle‑tested, multilingual profanity engine. |
| Is it free? | 100 % open‑source, MIT‑licensed. No API fees. |
| Which platforms are supported? | Telegram, Discord, Slack, WhatsApp, plus any OpenClaw‑compatible channel. |
| How many languages are covered? | 24 languages out of the box, with full Unicode support. |
| Can I use it with other frameworks? | The core Filter class can be called directly from any Node.js project. |
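The "Scunthorpe" answer in the table comes down to word-boundary matching: flagging whole words rather than raw substrings. A minimal, hypothetical illustration of the difference (the real engine layers a whitelist on top):

```js
// Naive substring matching falsely flags innocent words; word-boundary
// matching does not. Illustrative only -- not the library's actual matcher.
function containsWordNaive(text, word) {
  return text.toLowerCase().includes(word);
}

function containsWord(text, word) {
  return new RegExp(`\\b${word}\\b`, "i").test(text);
}

// "class" and "assignment" merely contain the substring "ass":
console.log(containsWordNaive("class assignment", "ass")); // true (false positive)
console.log(containsWord("class assignment", "ass"));      // false (correct)
console.log(containsWord("you ass", "ass"));               // true (correct)
```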
🌟 Key Benefits
- Zero cost – runs completely locally, no external API charges.
- Sub‑millisecond latency – ideal for high‑throughput bots.
- 24‑language multilingual support.
- Evasion‑proof – leetspeak, Unicode tricks, spaced characters, etc.
- Native OpenClaw integration – simple hooks, skills, or direct API usage.
- MIT licensed – free to use, modify, and redistribute.
📚 Resources
| Resource | Link |
|---|---|
| npm package | npm install openclaw-profanity |
| GitHub repo | |
| Documentation / Integration Guide | |
| Live Demo | |
| Related projects | glin-profanity-mcp (MCP server for Claude Desktop, Cursor, Windsurf); glin-profanity (Python version for Flask/Django) |
🙋‍♀️ Get In Touch
- Questions? Drop a comment below or open an issue on GitHub.
- Use case? Let us know which messaging platform you’re building for – we love hearing about real‑world deployments!
Your bot deserves better than paying $100+/month for profanity checking. Add one line of code and solve it forever.