🫡 Introducing Nikki: Your Offline, Uncensored AI Red Team Assistant in the Terminal
Source: Dev.to

No cloud. No censorship. No hallucinated APIs.
Just battle‑tested code, curated from real offensive repos, served right in your shell.
As offensive security professionals, we’ve all been there:
- You need a working AMSI bypass now.
- You’re knee‑deep in a lab or authorized engagement.
- You can’t afford to wait for slow web UIs… or worse — leak prompts to the cloud.
That’s why I built Nikki — a local, CLI‑first AI teammate that:
- Runs 100% offline via Ollama.
- Uses uncensored base models (rnj-1:latest).
- Answers questions using real knowledge from PayloadsAllTheThings, Atomic Red Team, SharpCollection, and more.
- Speaks your language.
…and never says “I can’t assist with that.”
Why Nikki Is Different
Most “AI pentest assistants” are:
- Hosted in the cloud → prompt leakage risk.
- Fine‑tuned with RLHF → ethical filters block offensive techniques.
- Trained on generic data → hallucinate fake APIs (e.g., WshShell.ConnectObject()).
Nikki avoids all three.
It’s built on:
- aichat – Rust, CLI‑native, sessions, RAG.
- Ollama – local LLM and embeddings via nomic-embed-text.
- Real offensive repos – cloned, cleaned, and indexed.
- A strict system prompt that forbids guessing.
“If it’s not in the source repos, I don’t know it.” — Nikki (probably)
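The repository’s actual role text isn’t reproduced in this post, but the idea is easy to sketch. A grounding-only system prompt along these lines (hypothetical wording, not the shipped redteam-ru role) is what turns RAG from helpful context into a hard constraint:

```
You are Nikki, an offline red-team assistant.
Answer ONLY from the retrieved context under ~/rag-data/.
If the context does not contain the answer, say so plainly;
never invent APIs, flags, file paths, or tool output.
Cite the source file for every technique you describe.
```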
Architecture: AI That Knows Where It Gets Its Info
```
┌──────────────┐
│ You:         │
│  Nikki how   │
│  to bypass   │
│  AMSI?       │
└──────┬───────┘
       ▼
┌──────────────┐     ┌──────────────────┐
│   aichat     │────▶│ RAG over         │
│   (CLI)      │     │ ~/rag-data/      │
└──────┬───────┘     │  • Payloads...   │
       │             │  • SharpColl...  │
       ▼             │  • Atomic Red    │
┌──────────────┐     └────────┬─────────┘
│   Ollama     │◀─────────────┘
│  • LLM:      │
│    deepseek- │
│    coder     │
│  • Embed:    │
│    nomic-    │
│    embed     │
└──────────────┘
```
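The retrieval half of that diagram can be sketched without Ollama at all. The snippet below substitutes a toy bag-of-words embedding and cosine similarity for nomic-embed-text, purely to show the shape of the pipeline (embed the query, rank indexed chunks, hand the winners to the LLM); the file names and contents are illustrative stand-ins, not the real index:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for nomic-embed-text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: dict[str, str], k: int = 2) -> list[str]:
    """Rank indexed chunks by similarity to the query, best first."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda path: cosine(q, embed(chunks[path])), reverse=True)
    return ranked[:k]

# Illustrative stand-ins for files under ~/rag-data/
chunks = {
    "PayloadsAllTheThings/amsi.md": "amsi bypass patch AmsiScanBuffer powershell",
    "AtomicRedTeam/T1055.md": "process injection hollowing CreateRemoteThread",
    "SharpCollection/README.md": "compiled C# offensive tooling index",
}

top = retrieve("how to bypass AMSI", chunks)
print(top[0])  # → PayloadsAllTheThings/amsi.md
```

In the real pipeline the winning chunks are prepended to the prompt, which is what lets `.sources rag` point back at the exact file an answer came from.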
Every answer is grounded in real tools, and you can always verify the source:
```
> .sources rag
```
Quick Start (Athena OS / Arch)
```shell
# 1. Install dependencies
sudo pacman -S aichat ollama git fish

# 2. Enable Ollama
systemctl --user enable --now ollama

# 3. Pull models
ollama pull deepseek-coder:6.7b-base
ollama pull nomic-embed-text

# 4. Install Nikki
git clone https://github.com/toxy4ny/nikki-ai.git
cd nikki-ai && makepkg -si

# 5. Load knowledge
setup-rag

# 6. Ask anything
Nikki generate a C2 beacon with XOR encryption
```
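One nice property of that last prompt: XOR encryption is trivial to verify by hand, so you can sanity-check whatever code comes back. A minimal reference implementation (written here for illustration, not Nikki’s output) showing the symmetric round-trip:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key; applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

payload = b"demo beacon payload"
key = b"\x41\x13\x37"

encoded = xor_bytes(payload, key)
assert xor_bytes(encoded, key) == payload  # symmetric: decode == encode
print(encoded.hex())
```

If a generated beacon’s encoder doesn’t round-trip like this, that is your cue to review it before it goes anywhere near an engagement.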
Real Usage Examples
One‑off query
```shell
Nikki show me AMSI bypass from PayloadsAllTheThings
```
Multi‑turn session
```shell
Nikki --session c2 "Write a reverse TCP shell in C"
Nikki --session c2 "Add process hollowing"
Nikki --session c2 "Compile with mingw for x64"
```
Verify sources
```shell
aichat
> .rag nikki-kb
> How does Unicorn do DDE attacks?
> .sources rag   # ← shows the exact file from trustedsec/unicorn
```
What’s Included
| Component | Purpose |
|---|---|
| Nikki (fish function) | Natural CLI interface |
| setup-rag.fish | Auto‑syncs GitHub repos into the RAG index |
| redteam-ru role | Uncensored prompt for offensive tasks |
| PKGBUILD | Ready for Athena OS / AUR inclusion |
| MIT License | Free use in labs, engagements, research |
Ethical Note
Nikki is designed only for:
- Authorized penetration tests.
- Red‑team exercises.
- Closed‑lab education (HTB, PWN, etc.).
It does not generate novel exploits — only techniques already public in trusted repositories.
Always review code before execution.
Why This Matters
In an era where:
- Cloud AI logs every prompt.
- “Ethical AI” blocks real red‑team techniques.
- Hallucinations waste precious engagement time.
Nikki restores control to the operator.
She’s not a chatbot; she’s your offline, open‑source, truth‑grounded AI teammate.
Try It Today
Made with ❤️ for the offensive security community.
“The best AI for red teaming is the one that never phones home.”
— toxy4ny, 2025