Could AI in the Terminal Make Us Worse Engineers?
Source: Dev.to
Imagine this: an engineer with 10 years of experience builds a small script that translates natural language into shell commands. A month later, he can’t write tar -xzf from memory—a command he’s typed thousands of times. His brain, given the option, quietly stopped retaining what the tool could retrieve in under a second. Is this our future reality?
My Experiment
I wanted to see whether AI in the terminal would negatively impact me, so I built a zsh plugin called zsh‑ai‑cmd and used it daily for a month. The answer wasn’t the simple one I was hoping for.
The Workflow (Seductive)
# find all files larger than 100MB in home directory
- Press Enter.
- The plugin intercepts the line, gathers context (OS, cwd, available tools, git status, recent commands), ships it to an AI model, and replaces your input with:
find ~ -type f -size +100M -exec ls -lh {} \; # highlighted in green
- Press Enter again to execute, or Ctrl‑C to cancel.
The key design decision in _ai-cmd-accept-line is that it never auto‑executes:
# Do NOT call .accept-line — let the user review and press Enter again
return 0
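As a rough zsh sketch of that pattern (this is illustrative, not the plugin's actual source; `generate_command` is a hypothetical stand-in for the AI request): the widget rewrites the buffer and returns, so execution always takes a second Enter.

```shell
# Sketch of a ZLE widget that swaps the buffer but never auto-executes.
# `generate_command` is a hypothetical placeholder for the plugin's AI call.
_ai-cmd-accept-line() {
  if [[ $BUFFER == '#'* ]]; then
    BUFFER=$(generate_command "$BUFFER")  # replace the comment with a command
    zle redisplay                         # show it for review
    # Do NOT call `zle .accept-line` -- the user must press Enter again
    return 0
  fi
  zle .accept-line                        # ordinary input runs normally
}
zle -N accept-line _ai-cmd-accept-line    # hook the widget into Enter
```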
You always see the command before it runs. This pattern can save you from dangerous outputs—e.g., an rm -rf /tmp/* that would have nuked active Unix sockets, or a chmod -R 777 . that would have broken SSH keys.
Seeing ≠ Understanding
“You see the command” isn’t the same as “you understand the command.”
And that’s where the degradation begins.
After a month of AI‑assisted usage:
| Command type | Effect |
|---|---|
| Simple (ls, cd, grep) | No change |
| Complex, requiring real thought | No change |
| Mid‑level (commands you used to know but now don’t bother remembering) | Erosion, e.g., tar -xzf, awk '{print $3}', find -mtime |
The brain, being efficient, decides: why store what you can retrieve in a second?
This mirrors the well‑documented Google Effect (Sparrow et al., 2011): people are less likely to remember information when they know they can look it up. The terminal AI is the Google Effect, accelerated. Google forces you to formulate a query, scan results, and adapt the answer. The AI plugin shrinks the cognitive gap to a single Enter press.
Safety Checks – A Double‑Edged Sword
The plugin includes a safety check that scans generated commands against 23 dangerous patterns (e.g., rm -rf /, fork bombs, disk wipes, curl | sh, etc.):
dangerous_patterns=(
'*rm -rf /*'
'*dd if=* of=/dev/*'
'*curl *\|*sh*'
'*shutdown*'
# …
)
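To make the mechanism concrete, here is a minimal self-contained sketch of glob-based screening in plain shell. The function name and the four-pattern list are illustrative, adapted from the excerpt above (the real plugin is zsh and checks 23 patterns):

```shell
# Return 0 (dangerous) if the command matches any glob pattern, 1 otherwise.
is_dangerous() {
  local cmd=$1 pattern
  local dangerous_patterns=(
    '*rm -rf /*'
    '*dd if=* of=/dev/*'
    '*curl *|*sh*'       # a literal pipe between a curl and an sh
    '*shutdown*'
  )
  for pattern in "${dangerous_patterns[@]}"; do
    # An expanded variable in a case pattern is matched as a glob;
    # metacharacters like * stay active, but | is matched literally.
    case "$cmd" in
      ($pattern) return 0 ;;
    esac
  done
  return 1
}
```

In practice the caller would color the prompt red on 0 and append `[ok]` in green on 1.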
- Dangerous commands → highlighted red with a warning.
- Safe commands → highlighted green with [ok].
This responsible design introduces a subtle problem: the green highlight creates trust. After seeing [ok] a hundred times, you stop reading the command and just press Enter.
The real near‑disasters involve commands that are syntactically valid but semantically wrong.
Example: find /var/log -mtime +7 -delete (the missing -type f means directories older than 7 days are deleted too). No pattern list will catch that. No safety check will flag “technically correct but subtly dangerous.”
The safety check catches catastrophic failures, but not the slow, quiet kind—the commands that do 90% of what you wanted and damage the other 10%.
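The difference is easy to reproduce in a scratch directory. A small sketch, using `touch -t` to backdate entries so -mtime +7 matches:

```shell
# Reproduce the missing -type f problem safely in a temp directory.
tmp=$(mktemp -d)
mkdir "$tmp/old-dir"                              # a directory "older" than 7 days
touch -t 202001010000 "$tmp/old-dir" "$tmp/old-file"

# Without -type f, the directory matches too (and -delete would act on it):
matched_without=$(find "$tmp" -mindepth 1 -mtime +7 | wc -l)
# With -type f, only the regular file matches:
matched_with=$(find "$tmp" -mindepth 1 -mtime +7 -type f | wc -l)

echo "without -type f: $matched_without matches"  # 2 (file + directory)
echo "with -type f:    $matched_with matches"     # 1 (file only)
rm -rf "$tmp"
```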
When the AI Is Unavailable
Picture this: you’re on a remote server, no plugin, no internet, and you need to extract an archive. You spend 15 seconds trying to recall tar syntax—a command you’ve used thousands of times—feeling genuine uncertainty.
The real question isn’t “does AI make you faster?” (yes) or “more productive?” (probably).
It’s what happens when the AI isn’t there.
- Your laptop dies.
- The API is down.
- You’re on an air‑gapped server in a datacenter.
These aren’t hypotheticals—they’re Tuesdays.
A tool that makes you faster when available but less capable when unavailable has a net effect that depends entirely on reliability. The reliability of external API calls (internet → cloud service) is definitionally less than the reliability of knowledge stored in your own head.
Historical Parallels
| Tool | Did it make us forget? | Accepted? |
|---|---|---|
| IDEs | Language syntax (partially) | Yes |
| Stack Overflow | Algorithms (partially) | Yes |
| GPS | Navigation (research says yes: Dahmani & Bherer, 2020) | Yes |
| Calculators | Arithmetic (yes) | Yes; society decided the trade‑off was worth it |
The calculator parallel is telling. We accepted that mental arithmetic declined because we could solve higher‑order problems without wasting cognitive load on multiplication.
Is tar -xzf the multiplication of system administration?
Should we feel fine outsourcing it to a machine so we can think about architecture, reliability, and design instead?
Maybe. But there’s a key difference:
- Calculator: deterministic, exact answer every time.
- AI command generator: probable answer—usually right, but sometimes subtly wrong.
When your calculator says 847, it’s 847. When your AI says find /var/log -mtime +7 -delete, it might be silently missing -type f.
When Outsourcing Is Actually Beneficial
There are classes of commands where the degradation argument falls apart entirely.
# list all pods with their sidecar container names
The AI returns:
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .spec.containers[*]}{.name}{"\t"}{end}{"\n"}{end}' | grep -i sidecar
Nobody has this memorized. Nobody should.
This is a nested jsonpath expression with range iterators, tab‑separated output formatting, and a pipeline filter. The syntax is hostile to human memory by design.
Another example:
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
Again, a command that is impractical to memorize but extremely useful when generated on demand.
Takeaways
- AI‑assisted shells improve speed when the service is reachable.
- Reliance on AI erodes mid‑level command memory, leading to “knowledge at‑the‑point‑of‑need” rather than retained expertise.
- Safety highlights create trust, which can cause users to skip careful review.
- When the AI is unavailable, you may be left scrambling for basic commands you once knew.
- Historical precedents (calculators, GPS, IDEs) show societies can accept trade‑offs, but the probabilistic nature of AI adds a new risk layer.
- Outsourcing truly complex, non‑memorizable commands (e.g., intricate kubectl pipelines) is a net win; outsourcing simple, stable commands (e.g., tar -xzf) may be a net loss for long‑term expertise.
Final Question
Do we want a future where we can’t recall tar -xzf without an internet connection, or do we accept that trade‑off in exchange for freeing mental bandwidth for higher‑level problems?
The answer will shape how we design, adopt, and guard the tools that sit at the very heart of our daily workflows.
Overview
The following command finds all pods in CrashLoopBackOff across every namespace:
kubectl get pods --all-namespaces -o json \
| jq -r '
.items[]
| select(.status.containerStatuses[]?.state.waiting.reason == "CrashLoopBackOff")
| .metadata.namespace + "/" + .metadata.name
'
It works by:
- Getting the full JSON representation of all pods.
- Using jq to iterate over the items array.
- Safely accessing the nested field status.containerStatuses[].state.waiting.reason (the ? prevents errors when the field is missing).
- Selecting only those pods whose reason is "CrashLoopBackOff".
- Concatenating the namespace and pod name for easy reading.
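Since the selection step is the subtle part, it can be exercised without a cluster by feeding the same jq filter a hand-written pod list (the sample below is fabricated for illustration):

```shell
# Run the jq filter against a tiny fabricated pod list: one crashing pod,
# one healthy pod, and one pod with no containerStatuses at all.
sample='{"items": [
  {"metadata": {"namespace": "web", "name": "api-1"},
   "status": {"containerStatuses": [
     {"state": {"waiting": {"reason": "CrashLoopBackOff"}}}]}},
  {"metadata": {"namespace": "web", "name": "api-2"},
   "status": {"containerStatuses": [{"state": {"running": {}}}]}},
  {"metadata": {"namespace": "batch", "name": "job-1"}, "status": {}}
]}'

crashing=$(echo "$sample" | jq -r '
  .items[]
  | select(.status.containerStatuses[]?.state.waiting.reason == "CrashLoopBackOff")
  | .metadata.namespace + "/" + .metadata.name
')
echo "$crashing"   # web/api-1
```

Only the first pod survives the select; the `?` quietly skips the pod whose status has no containerStatuses at all.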
Writing this from scratch normally takes a few minutes of trial‑and‑error, schema lookup, and fiddling with jq syntax.
How AI Accelerates the Process
With an AI‑powered plugin you can simply type:
# find all crashing pods across all namespaces
and receive a ready‑to‑run command in under a second.
The degradation thesis—that reliance on AI erodes memory—applies mainly to recall tasks (e.g., remembering tar -xzf).
Commands like the kubectl examples above belong to a different category: composition. They are assembled from documentation each time, not memorized. Outsourcing composition to AI replaces a 10‑minute Stack Overflow search with a 1‑second generation, without harming the underlying knowledge.
More Real‑World Examples
1. Top memory‑consuming pods
kubectl top pods --all-namespaces --sort-by=memory | head -20
2. All ingress rules with back‑ends across namespaces
kubectl get ingress --all-namespaces -o jsonpath='
{range .items[*]}
{.metadata.namespace}{"\t"}
{.metadata.name}{"\t"}
{range .spec.rules[*]}
{.host}{"\t"}
{range .http.paths[*]}
{.path}{" -> "}{.backend.service.name}:{.backend.service.port.number}{"\n"}
{end}
{end}
{end}'
The JSONPath expression above is 270 characters of nested syntax. Memorizing it isn’t realistic; it’s a syntax‑assembly task. An engineer who understands Kubernetes networking isn’t “worse” for letting AI generate the JSONPath; they’re simply faster.
A Balanced Framework
| Guideline | Rationale |
|---|---|
| Use AI for recall, not for understanding | If you’ve typed tar -xzf countless times but can’t recall the flags today, let AI fill the gap. |
| Read first‑time‑use commands | When AI suggests a new pattern (e.g., find … -exec), study each flag and option before running. |
| Treat AI output as a starting point | Safety checks catch obvious hazards (rm -rf /) but miss context‑specific mistakes (rm -rf ./build vs. rm -rf ./build/cache). Always review. |
| Keep offline skills alive | Periodically type commands manually. Think of it like physical exercise: you don’t stop walking because cars exist. |
| Be honest about trade‑offs | You gain speed, but you may lose retention. Consider scenarios where you lack internet access or need deep understanding. |
| Acknowledge uncertainty | Long‑term studies on AI‑assisted CLI work are still pending. Short‑term experiments are data points, not definitive conclusions. |
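As a concrete drill for the “keep offline skills alive” row, here is the tar round trip this article keeps returning to, typed out flag by flag in a scratch directory:

```shell
# A quick tar muscle-memory drill: create an archive, then extract it.
tmp=$(mktemp -d)
echo "hello" > "$tmp/note.txt"

tar -czf "$tmp/backup.tar.gz" -C "$tmp" note.txt  # -c create, -z gzip, -f archive file
mkdir "$tmp/out"
tar -xzf "$tmp/backup.tar.gz" -C "$tmp/out"       # -x extract, -z gzip, -f archive file

extracted=$(cat "$tmp/out/note.txt")
echo "$extracted"   # hello
rm -rf "$tmp"
```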
Bottom Line
- AI tools work. They save time, reduce context‑switching, and increase productivity.
- They may gradually diminish the ability to perform the same tasks unaided. Whether that matters is a personal decision for each engineer.
The plugin will continue to be useful regardless of the outcome.
Introducing zsh-ai-cmd
zsh-ai-cmd is a Zsh plugin that translates natural‑language prompts into shell commands using AI (Anthropic Claude, OpenAI, or a local Ollama instance).
Key features:
- No external runtimes: only Zsh, curl, and jq.
- Works with any AI backend that accepts a simple HTTP request.
- Seamlessly integrates into your existing Zsh workflow.
# Example usage
% # show me the top 10 memory‑consuming pods sorted by usage
% zsh-ai-cmd "show me the top 10 memory-consuming pods sorted by usage"
kubectl top pods --all-namespaces --sort-by=memory | head -10
Give it a try and see how much faster you can get from idea to execution!