# Copilot CLI as a Creative Engine: Building My TUI Trilogy
Source: Dev.to
*This is a submission for the GitHub Copilot CLI Challenge.*
## 🚀 From Cyberpunk to Zen: A Trilogy of TUI Tools
For this challenge I didn’t just build one tool; I built a trilogy of Terminal User Interface (TUI) applications.
My goal was to leverage GitHub Copilot to transform the cold command line into a living, breathing digital ecosystem.
The journey evolved from a rigid, agent‑based approach to a fluid, context‑engineered workflow. Below is the result of that evolution.
### 1️⃣ The Netrunner Deck (Cyberpunk Monitor)
*The “High‑Friction” Experiment*
A cyberpunk‑themed network monitor that visualizes Docker containers and network connections as an ASCII topology.
- Matrix Rain effects on the side panels.
- Deterministic Placement – nodes are positioned using SHA‑256 hashes of their names, so the same infrastructure always renders the same layout.
- Live Monitoring – real‑time Docker and network status updates.
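To illustrate the deterministic-placement idea, here is a minimal sketch. The function name, grid size, and byte slicing are my own illustrative choices, not the actual Netrunner implementation — the key point is that a SHA‑256 digest of a node's name yields a stable grid position across runs and machines:

```python
import hashlib

def place_node(name: str, cols: int = 80, rows: int = 24) -> tuple[int, int]:
    """Map a container/host name to a stable (x, y) grid cell.

    SHA-256 is deterministic, so the same infrastructure always
    produces the same topology layout, run after run.
    """
    digest = hashlib.sha256(name.encode("utf-8")).digest()
    x = int.from_bytes(digest[:4], "big") % cols   # column from first 4 bytes
    y = int.from_bytes(digest[4:8], "big") % rows  # row from next 4 bytes
    return x, y

# Same input, same position -- no random jitter between runs.
assert place_node("redis-cache") == place_node("redis-cache")
```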
### 2️⃣ Code Bonsai (Git Visualizer)
*The “Freestyle” Experiment*
A digital terrarium that transforms your Git repository into a growing ASCII bonsai tree:
- Trunk – thickness represents lines of code.
- Leaves – represent healthy files.
- Weather – the “sky” changes based on commit activity (sunny for active days, moonlit for dormant ones).
- Wildlife – “bugs” appear on branches proportional to linting errors.
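The metric-to-tree mapping can be sketched as a few pure functions. In the real app the inputs would come from `git log` and a linter; the function names and thresholds below are illustrative assumptions, not Code Bonsai's actual values:

```python
def trunk_width(lines_of_code: int, max_width: int = 9) -> int:
    """Thicker trunk for bigger codebases, capped so the tree fits on screen."""
    return min(max_width, 1 + lines_of_code // 5000)

def sky(commits_today: int) -> str:
    """Sunny sky for active days, moonlit for dormant ones."""
    return "sunny" if commits_today > 0 else "moonlit"

def bug_count(lint_errors: int, errors_per_bug: int = 10) -> int:
    """One ASCII 'bug' on the branches per chunk of linting errors."""
    return lint_errors // errors_per_bug
```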
### 3️⃣ Cozy Cabin (Terminal Wallpaper)
*The “Context Engineering” Masterpiece*
An interactive terminal background designed for focus and calm. It renders a “digital cabin window” with:
- Procedural Scenery – layered mountains and forests with live weather from OpenWeatherMap (clouds, rain, day/night cycles).
- System Shelf – a quirky take on system monitoring:
  - 🔋 Battery Cat – naps when discharging, wakes up when plugged in.
  - 🌿 RAM Plant – wilts if memory usage gets too high.
  - 🔥 CPU Fireplace – flickers more intensely as your CPU load increases.
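The System Shelf boils down to mapping live readings onto ASCII states. In practice the readings would come from psutil (`psutil.sensors_battery()`, `psutil.virtual_memory().percent`, `psutil.cpu_percent()`); this sketch shows only the mapping logic, with thresholds and glyphs of my own invention:

```python
def battery_cat(plugged_in: bool) -> str:
    """In the real app, plugged_in would come from psutil.sensors_battery()."""
    return "cat: awake" if plugged_in else "cat: napping"

def ram_plant(ram_percent: float, wilt_at: float = 80.0) -> str:
    """ram_percent would come from psutil.virtual_memory().percent."""
    return "plant: wilting" if ram_percent >= wilt_at else "plant: thriving"

def cpu_fireplace(cpu_percent: float) -> str:
    """cpu_percent would come from psutil.cpu_percent().

    More load, more flames: bucket the load into one of four frames.
    """
    frames = (".", "*", "**", "***")
    return frames[min(3, int(cpu_percent // 25))]
```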
## 🏗️ Architecture & Workflow
The most interesting part of this challenge wasn’t just the code, but how my interaction with Copilot changed. I moved from using pre‑defined agents to a Context Engineering approach.
```text
┌──────────────────────┐   ┌──────────────────────┐   ┌──────────────────────┐
│  Phase 1: Netrunner  │   │ Phase 2: Code Bonsai │   │ Phase 3: Cozy Cabin  │
│    (Rigid Agents)    │   │  (Freestyle/Native)  │   │   (Context First)    │
└──────────┬───────────┘   └──────────┬───────────┘   └──────────┬───────────┘
           │                          │                          │
           ▼                          ▼                          ▼
   [Defined Persona]       [Raw Prompts + Models]      [Plan.md + Context]
           │                          │                          │
           ▼                          ▼                          ▼
    (Slow Iteration)        (Fast but Chaotic)          (Rapid & Reliable)
           │                          │                          │
           ▼                          ▼                          ▼
   "It works, but…"            "80% There"              "90% Ready Code"
```
## 🧰 The Tech Stack
| Component | Choice |
|---|---|
| Language | Python 3.12+ |
| TUI Framework | Textual |
| Rendering | Rich |
| Package Manager | uv |
| APIs & Libraries | Docker SDK, psutil, OpenWeatherMap API |
## 🎥 Demo
- The Netrunner Deck – Repository: – Demo:
- Code Bonsai – Repository: – Demo:
- Cozy Cabin – Repository: – Demo:
## My Experience with GitHub Copilot CLI
### Phase 1 – “By the Book” Approach
- Time Cost: 2 days
- Project: The Netrunner Deck
- Model(s): Claude Opus 4.5 (planning & coding) → switched to Sonnet to conserve premium requests.
- Takeaway: The workflow was slow; managing agents and rigid context consumed more time than actual coding.
### Phase 2 – “Freestyle” Approach
- Time Cost: 1 day
- Project: Code Bonsai
- Model(s): Sonnet (planning) + Codex (implementation)
- Takeaway: Much faster. The CLI returned ~80% of the desired code, but occasionally missed architectural nuance.
### Phase 3 – “Context Engineer” Approach
- Time Cost: 1 day
- Project: Cozy Cabin
- Model(s): Codex, steered by curated context rather than a persona.
- Takeaway: The sweet spot. A lightweight setup combined with deliberate Plan Review and Context Engineering yielded rapid, reliable results.
#### Generate Code
**Model Usage**: I used **Codex** exclusively, relying on my engineered context to guide it.
**The Result**: This was the peak experience. The output adapted to **90%** of my expectations. By focusing on the *plan* rather than the *persona*, Copilot became a true extension of my thought process.
---
## Conclusion: The “Greenfield” Flow
Based on this trilogy of experiments, here is my recommended workflow for starting a new project with Copilot CLI:
1. **Plan with Power**
   Use high‑reasoning models like **Claude Opus** or **Gemini Pro** to create the initial plan.
2. **Review**
   Review the plan carefully. *Note: the plan is saved in the chat session memory by default, not as a file in the repo, to keep the workspace clean.*
3. **Code with Speed**
   Switch to **GPT Codex** for implementation. It’s faster and follows the context well.
4. **Feedback Loop**
   Implement the code, check the layout/structure, and provide feedback.
5. **Initialize Context**
   Use `/init` to generate project‑specific Copilot instructions.
6. **Optimized Instructions**
   Use the [awesome‑copilot](https://github.com/github/awesome-copilot) plugin to find the best agent/instruction/prompt for your specific project type.
   *Iterate*: return to the planning step for the next feature. **Crucial**: update the Copilot instructions frequently to keep the context fresh.
7. **Leverage MCP**
   Use Model Context Protocol (MCP) tools like **Context7** (to check best practices & modern frameworks) and **Serena** (to optimize context indexing).
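To make the planning step concrete, here is the kind of `Plan.md` structure I mean — the sections and wording are purely illustrative, not the actual plan from any of the three projects:

```markdown
# Plan: Cozy Cabin – System Shelf widget

## Goal
Render battery, RAM, and CPU state as ambient cabin objects.

## Constraints
- Python 3.12+, Textual for the TUI, Rich for rendering, psutil for readings.
- Refresh at most once per second; never block the render loop.

## Steps
1. Pure functions mapping readings -> ASCII states (testable in isolation).
2. A Textual widget that polls psutil on a timer and re-renders.
3. Graceful fallback when a sensor (e.g. battery) is unavailable.

## Out of scope
Weather layers, scenery, day/night cycle.
```

Feeding a plan like this to the model, instead of a persona, is what "Plan > Config" means in practice.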
The Copilot CLI isn’t just a code generator; it’s a collaboration engine. My key takeaway is that **Plan > Config**. A well‑structured plan and curated context are far more effective than a heavily configured environment.
---
## 📚 References
- **Tools**:
- [GitHub Copilot CLI](https://github.com/features/copilot/cli)
- [Context7](https://context7.com/) (MCP)
- [Serena](https://github.com/oraios/serena) (MCP)
- **Inspiration**:
- [awesome‑copilot](https://github.com/github/awesome-copilot) (prompt engineering)
- [Copilot best practice](https://docs.github.com/en/copilot/how-tos/copilot-cli/cli-best-practices)
- **Libraries**:
- [Textualize](https://textualize.io/)