Building an Autonomous SOC Analyst Swarm with Python
TL;DR
I built an Autonomous SOC Swarm in which three specialized AI agents (Network, Identity, Threat Intel) collaborate to analyze security logs in real time. A Coordinator agent aggregates their votes and autonomously blocks threats or flags anomalies. This article covers the design, the Python implementation, and how I simulated a Mixture‑of‑Agents pattern for cybersecurity.
Introduction
In the world of Security Operations Centers (SOC), alert fatigue is real. Analysts burn out trying to triage thousands of events daily. I wondered:
Could I build a squad of AI agents that think like a seasoned security team?
In this experiment I moved beyond a single “chatbot” approach. I designed a swarm where each agent wears a specific hat—one watching the firewall, one checking user behavior, and one consulting threat intelligence. By making them vote, I aimed to reduce false positives and automate the boring stuff.
What This Article Is About
A technical walkthrough of building a Mixture‑of‑Agents (MoA) system for SOC automation. You’ll see:
- ECD (Event‑Context‑Decision) Architecture
- Python implementation of a voting mechanism
- A simulated “Live” dashboard in the terminal
Tech Stack
| Technology | Purpose |
|---|---|
| Python 3.12 | Core logic |
| Rich | Beautiful terminal UI |
| Mermaid.js | Visualizing agent thoughts |
| Pillow | Generate frame‑by‑frame forensics animation |
Why Read It?
If you’re interested in Multi‑Agent Systems or Cybersecurity Automation, this project bridges the two fields. It’s not just theory; it’s a runnable simulation you can clone and extend. Plus, watching the agents “argue” over a verdict in the logs is pretty cool.
Let’s Design
Architecture Overview
The system follows a hub‑and‑spoke model. The Coordinator sits in the center, receiving inputs from specialized agents.
The Workflow
- Ingest – A log event arrives (e.g., SSH login).
- Analyze – All three agents analyze it in parallel.
- Vote – Each agent submits a verdict (SAFE, SUSPICIOUS, MALICIOUS) and a confidence score.
- Decide – The Coordinator weighs the votes and executes a response.
Agent Communication
Here’s the message flow when a suspicious event occurs:
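A rough sequence sketch of that exchange, written in Mermaid (the agent names match the design above; the confidence numbers are illustrative):

```mermaid
sequenceDiagram
    participant Log as Log Source
    participant Net as NetworkAgent
    participant Id as IdentityAgent
    participant Intel as ThreatIntelAgent
    participant Co as Coordinator

    Log->>Net: event (port_scan)
    Log->>Id: event (port_scan)
    Log->>Intel: event (port_scan)
    Net->>Co: vote: malicious (0.95)
    Id->>Co: vote: suspicious (0.70)
    Intel->>Co: vote: malicious (0.90)
    Co->>Co: aggregate_votes()
    Co-->>Net: action: BLOCK_IP
```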
Let’s Get Cooking
I started by defining the Agents. They’re modular so I can swap their “brains” (simple heuristics vs. LLMs) easily.
1. The Agents
```python
# src/agents.py
from typing import Any, Dict, List


class BaseAgent:
    def __init__(self, name: str):
        self.name = name

    def analyze(self, log: Dict[str, Any]) -> Dict[str, Any]:
        raise NotImplementedError
```
NetworkAgent
```python
class NetworkAgent(BaseAgent):
    def analyze(self, log: Dict[str, Any]) -> Dict[str, Any]:
        # Simple heuristic for port-scan detection
        if log.get("event_type") == "port_scan":
            return {
                "agent": self.name,
                "verdict": "malicious",
                "confidence": 0.95,
                "reason": f"Port scan detected from {log['source_ip']}",
            }
        return {"agent": self.name, "verdict": "safe", "confidence": 0.90}
```
CoordinatorAgent
```python
class CoordinatorAgent(BaseAgent):
    def aggregate_votes(self, votes: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Mixture-of-Agents voting logic."""
        score = 0
        for vote in votes:
            if vote["verdict"] == "malicious":
                score += 2
            elif vote["verdict"] == "suspicious":
                score += 1

        if score >= 3:
            return {"final_verdict": "CRITICAL", "action": "BLOCK_IP"}
        elif score >= 1:
            return {"final_verdict": "WARNING", "action": "FLAG_FOR_REVIEW"}
        else:
            return {"final_verdict": "SAFE", "action": "MONITOR"}
```
In my opinion, this simple scoring system is often more robust than a single monolithic prompt, because it forces consensus.
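A quick sanity check of that logic, reusing the classes above: one malicious vote (2 points) plus one suspicious vote (1 point) reaches the CRITICAL threshold of 3.

```python
coordinator = CoordinatorAgent(name="Coordinator")

votes = [
    {"agent": "NetworkAgent", "verdict": "malicious", "confidence": 0.95},
    {"agent": "IdentityAgent", "verdict": "suspicious", "confidence": 0.70},
    {"agent": "ThreatIntelAgent", "verdict": "safe", "confidence": 0.80},
]

print(coordinator.aggregate_votes(votes))
# {'final_verdict': 'CRITICAL', 'action': 'BLOCK_IP'}  (score = 2 + 1 = 3)
```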
2. The Orchestration
Below is a minimal loop that generates mock log data, feeds it to the agents, and lets the coordinator decide.
```python
# src/main.py
import random
import time

from agents import NetworkAgent, CoordinatorAgent

# Instantiate agents
network = NetworkAgent(name="NetworkAgent")
coordinator = CoordinatorAgent(name="Coordinator")


def mock_log():
    """Generate a random log entry."""
    events = ["ssh_login", "port_scan", "file_access"]
    event = random.choice(events)
    return {
        "event_type": event,
        "source_ip": f"10.0.{random.randint(0, 255)}.{random.randint(1, 254)}",
        "user": f"user{random.randint(1, 5)}",
    }


def run():
    while True:
        log = mock_log()
        vote = network.analyze(log)  # In a full impl you’d call all agents
        decision = coordinator.aggregate_votes([vote])
        print(f"[{log['event_type']}] → {decision['final_verdict']} – {decision['action']}")
        time.sleep(2)


if __name__ == "__main__":
    run()
```
Running the script produces a live‑updating terminal view (enhanced with Rich in the full repo) that shows agents voting and the coordinator’s final action.
Wrap‑Up
- Mixture‑of‑Agents gives you a flexible, fault‑tolerant way to automate SOC tasks.
- The voting mechanism is easy to extend: add more agents, weight votes differently (see the sketch below), or plug in LLM‑driven reasoning.
- The whole project (including richer UI, additional agents, and Docker support) is open‑source – feel free to clone, experiment, and contribute!
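For instance, a confidence‑weighted variant of the aggregator might look like this. A minimal sketch, assuming the same vote dictionaries as above; the weights and thresholds are arbitrary placeholders:

```python
from typing import Any, Dict, List


def aggregate_votes_weighted(votes: List[Dict[str, Any]]) -> Dict[str, Any]:
    """Weight each verdict by the agent's reported confidence."""
    weights = {"malicious": 2.0, "suspicious": 1.0, "safe": 0.0}
    score = sum(weights[v["verdict"]] * v.get("confidence", 1.0) for v in votes)

    if score >= 3.0:
        return {"final_verdict": "CRITICAL", "action": "BLOCK_IP"}
    elif score >= 1.0:
        return {"final_verdict": "WARNING", "action": "FLAG_FOR_REVIEW"}
    return {"final_verdict": "SAFE", "action": "MONITOR"}
```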
Happy hacking!
Autonomous SOC Swarm Demo
Below is a snippet that prints the thought process using Rich.
```python
# main.py
from rich.live import Live

# `table`, `generator`, and the agents are constructed earlier in the full repo
with Live(table, refresh_per_second=4) as live:
    for incident in generator.generate_stream(count=15):
        # ...
        votes = [
            network_agent.analyze(incident),
            identity_agent.analyze(incident),
            intel_agent.analyze(incident),
        ]
        decision = coordinator.aggregate_votes(votes)
        # ... print to table ...
```
This makes the tool feel like a real CLI product, which provides a great feedback loop during development.
Let’s Set Up
Clone the repo
```bash
git clone https://github.com/aniket-work/autonomous-soc-swarm
cd autonomous-soc-swarm
```
Install dependencies
```bash
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```
Let’s Run
Running the simulation is straightforward:
```bash
python main.py
```
As the swarm processes events, you can clearly see false positives being filtered out. For instance, a failed login might be flagged by the Identity Agent, but if the Network Agent sees no other suspicious traffic, the Coordinator downgrades it to a Warning rather than blocking the user entirely.
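In scoring terms, that is exactly what aggregate_votes produces: a lone suspicious vote contributes 1 point, enough for WARNING but short of the CRITICAL threshold.

```python
votes = [
    {"agent": "IdentityAgent", "verdict": "suspicious", "confidence": 0.70},  # failed login
    {"agent": "NetworkAgent", "verdict": "safe", "confidence": 0.90},         # no unusual traffic
    {"agent": "ThreatIntelAgent", "verdict": "safe", "confidence": 0.80},     # IP not on a blocklist
]

print(coordinator.aggregate_votes(votes))
# {'final_verdict': 'WARNING', 'action': 'FLAG_FOR_REVIEW'}  (score = 1)
```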
Example output
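With the minimal single-agent loop from src/main.py, the stream looks roughly like this (events and IPs are random):

```
[ssh_login] → SAFE – MONITOR
[file_access] → SAFE – MONITOR
[port_scan] → WARNING – FLAG_FOR_REVIEW
[ssh_login] → SAFE – MONITOR
```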
Closing Thoughts
Building this Autonomous SOC Swarm was a great exercise in agent orchestration. By splitting responsibilities, I created a system that is more explainable and easier to tune than a black‑box model.
In the future, I plan to connect this to real integration points like AWS GuardDuty or Splunk.
GitHub Repository: https://github.com/aniket-work/autonomous-soc-swarm
Disclaimer
The views and opinions expressed here are solely my own and do not represent the views, positions, or opinions of my employer or any organization I am affiliated with. The content is based on my personal experience and experimentation and may be incomplete or incorrect. Any errors or misinterpretations are unintentional, and I apologize in advance if any statements are misunderstood or misrepresented.