AI as a Junior Platform Engineer: How I 'Onboard' Coding Agents

Published: May 1, 2026 at 03:00 AM EDT
4 min read
Source: Dev.to


Introduction

The first time I started seriously using AI in my DevOps workflows, I made the same mistake I’ve seen many others make: expecting useful output without any onboarding. When a new engineer joins a team, we don’t hand them access to production systems on day one and expect them to be productive immediately. We give them:

  • Context about the system
  • Documentation
  • Boundaries
  • A safe environment to contribute
  • Time to understand how things work

Without that, even a talented engineer will struggle. AI is no different. One of the biggest differences between good and bad AI output is context. Without context, an AI agent will give you generic answers: technically correct, but not aligned with your system, architecture, or constraints. This is where a context.md file becomes incredibly powerful. A good context.md describes:

  • How your infrastructure is structured
  • Naming conventions
  • Environments and workflows
  • Constraints (cost, security, compliance)
  • How Terraform modules are organized
  • What “good” looks like in your system

Once the AI has this context, its suggestions start to feel less generic and more like they belong to your system—just like a junior engineer who finally understands how things are wired. Here is a condensed version of the context.md I use:

Platform Context

Overview

This repository manages AWS infrastructure using Terraform. Primary workloads run on EKS clusters across dev, staging, and production environments.

Key Principles

  • Prefer managed services where possible
  • Minimize blast radius of changes
  • Avoid cross‑environment coupling
  • All changes must go through PR review

Terraform Structure

  • modules/ → reusable infrastructure components
  • envs/dev → development environment
  • envs/staging → staging environment
  • envs/prod → production environment

Naming Conventions

Resources follow: `<environment>-<service>-<resource>`
Example: prod-payments-eks
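One way to make the convention hard to violate is to derive names from variables in a `locals` block; this is a hypothetical sketch, not code from the repo:

```hcl
# Hypothetical locals block deriving resource names from the
# <environment>-<service>-<resource> convention.
locals {
  environment = "prod"
  service     = "payments"

  # Evaluates to "prod-payments-eks"
  eks_name = "${local.environment}-${local.service}-eks"
}
```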

Guardrails

  • Never modify production directly
  • No terraform apply without PR approval
  • Avoid changes that trigger resource replacement unless explicitly required
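Some of these guardrails can be expressed directly in Terraform. For example, the `prevent_destroy` lifecycle argument makes Terraform refuse any plan that would delete or replace a critical resource; the resource below is illustrative, not the repo's actual definition:

```hcl
# Illustrative guardrail: terraform plan fails outright if a change
# would destroy or replace this cluster, forcing an explicit PR to
# remove prevent_destroy first.
resource "aws_eks_cluster" "payments" {
  name     = "prod-payments-eks"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }

  lifecycle {
    prevent_destroy = true
  }
}
```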

Cost Constraints

  • Prefer smaller instance types unless justified
  • Autoscaling should always have upper limits defined
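"Upper limits defined" translates directly into the scaling configuration. A hedged sketch of what that looks like for an EKS managed node group (names and sizes are hypothetical):

```hcl
# Illustrative node group: autoscaling always declares an explicit
# upper bound, so a runaway scale-up cannot blow past the cost budget.
resource "aws_eks_node_group" "payments" {
  cluster_name    = "prod-payments-eks"
  node_group_name = "prod-payments-workers"
  node_role_arn   = aws_iam_role.nodes.arn
  subnet_ids      = var.private_subnet_ids
  instance_types  = ["t3.large"] # prefer smaller types unless justified

  scaling_config {
    desired_size = 3
    min_size     = 2
    max_size     = 6 # explicit upper limit, per the cost constraints
  }
}
```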

Security

  • IAM roles must follow least privilege
  • No wildcard permissions unless explicitly approved
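Concretely, "least privilege, no wildcards" means specific actions on specific ARNs rather than `s3:*` or `Resource = "*"`. A hypothetical example (bucket name and policy name are invented for illustration):

```hcl
# Illustrative least-privilege policy: named actions on a named bucket,
# never wildcard actions or a wildcard resource.
data "aws_iam_policy_document" "payments_reader" {
  statement {
    effect = "Allow"
    actions = [
      "s3:GetObject",
      "s3:ListBucket",
    ]
    resources = [
      "arn:aws:s3:::prod-payments-artifacts",
      "arn:aws:s3:::prod-payments-artifacts/*",
    ]
  }
}
```

A policy like this is also easy for an AI agent to check proposed changes against: any diff introducing `"*"` in actions or resources is an immediate flag.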

Review Expectations

When reviewing a Terraform plan, focus on:

  • Resource replacements
  • Changes in networking or IAM
  • Scaling or cost implications
  • Cross‑module impact

What “Good” Looks Like

  • Small, isolated changes
  • Clear PR descriptions
  • Minimal blast radius

Working with AI as a Junior Platform Engineer

When onboarding a new engineer, we also define boundaries: what they should and should not do, where they can make changes, and what requires review. The same approach applies to AI.

AI should not:

  • Directly apply infrastructure changes
  • Bypass review processes
  • Make decisions that require operational judgment

These are intentional design choices. As with a new engineer, the goal is safe contribution, not maximum autonomy.

When a new engineer joins, we usually start them with:

  • Small changes
  • Pull requests
  • Code reviews
  • Guided feedback

This builds confidence and trust over time. The same model works extremely well with AI. Instead of letting AI operate directly on infrastructure, I treat it as a contributor to the PR workflow. It can:

  • Generate changes
  • Explain diffs
  • Highlight potential issues
  • Improve readability

The final decision still goes through human review, keeping the system safe while still benefiting from AI acceleration.

A junior engineer improves with feedback; AI systems also improve with iteration. When something is off, the answer is rarely “AI doesn’t work.” More often, it means the context was incomplete. Over time, refining context and expectations makes AI far more reliable—it starts behaving less like a random generator and more like a team member who understands the system.

Thinking of AI as a junior platform engineer changes how you design workflows. Instead of asking:

“What can this tool do?”

you start asking:

“How would I onboard someone into this system?”

That question naturally leads to:

  • Better context
  • Clearer boundaries
  • Safer workflows
  • More predictable outcomes

AI in DevOps doesn’t need to be treated as an autonomous operator. In many cases, it works best as a well‑onboarded junior engineer:

  • Guided by context
  • Constrained by guardrails
  • Contributing through safe workflows
  • Improving over time

The goal is not to replace engineers. It is to make systems easier to understand, safer to operate, and faster to evolve. Sometimes the best way to achieve that is not to give AI more power, but to onboard it more thoughtfully.

Curious to know what you think of this approach.

Originally published on Medium.
