Guardrails for AI-Generated IaC: How MyCoCo Made Speed Sustainable

Published: December 6, 2025 at 05:24 PM EST
4 min read
Source: Dev.to

TL;DR

The Problem: AI‑generated Terraform passes terraform validate but fails organizational compliance—missing tags, overly permissive IAM, exposed resources.

The Solution: Implement OPA‑based policy guardrails at the PR level that catch AI blind spots before code reaches production.

The Impact: MyCoCo reduced security findings from 47 to 3 per AI‑generated module while retaining 70% of velocity gains.

Key Implementation: Custom OPA policies targeting common AI omissions: required tags, encryption enforcement, least‑privilege IAM.

Bottom Line: AI accelerates IaC development, but only with organizational context injected through automated policy enforcement.

OPA policy guardrails bridge the gap between what AI knows (public patterns) and what your organization requires (security, compliance, and governance standards)

The Challenge: MyCoCo’s AI Experiment

Jordan, MyCoCo’s Platform Engineer, was convinced AI would transform their infrastructure delivery. With a major product launch approaching, the platform team faced an impossible timeline: 30 new Terraform modules in six weeks. Using GitHub Copilot and Claude, Jordan’s team produced the modules in just two weeks.

“We were shipping infrastructure faster than ever. The AI understood Terraform syntax perfectly. Every module passed validation on the first try.”

Maya, the Security Engineer, ran a pre‑production Checkov scan. The results stopped the launch cold: 47 security findings per module on average—S3 buckets without encryption, Lambda functions with wildcard IAM permissions, and no required Environment, Owner, or CostCenter tags.

“The AI wrote syntactically perfect Terraform. But it had no idea about our tagging policies, naming conventions, or security baseline. It generated code like we were a greenfield startup, not a company preparing for SOC 2.”

Sam, the Senior DevOps Engineer, had warned the team from the start. The confidence gap was real—the team trusted AI‑generated code more than manually written code, despite having less visibility into its logic.

Alex, VP of Engineering, faced a choice: delay the launch to manually fix every module, or find a way to make AI‑generated code meet MyCoCo’s standards automatically.

The Solution: OPA Guardrails for AI‑Generated Code

MyCoCo’s solution wasn’t to abandon AI—it was to teach their pipeline what the AI didn’t know. The team implemented a three‑layer policy enforcement approach using Open Policy Agent (OPA) integrated with Conftest.

Layer 1: Required Tags Policy

The most common AI omission was resource tagging. The following OPA policy blocks any PR whose plan creates resources without the required tags:

# policy/tags.rego
package terraform.tags

required_tags := ["Environment", "Owner", "CostCenter"]

deny[msg] {
    resource := input.resource_changes[_]
    resource.change.actions[_] == "create"

    tags := object.get(resource.change.after, "tags", {})
    missing := [tag | tag := required_tags[_]; not tags[tag]]

    count(missing) > 0
    msg := sprintf(
        "%s '%s' missing required tags: %v",
        [resource.type, resource.name, missing]
    )
}
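To make the Rego rule's logic concrete, the same check can be sketched in plain Python against the JSON that `terraform show -json` emits. This is an illustrative re-implementation for local experimentation, not part of MyCoCo's pipeline; the `missing_tag_violations` function name is made up here, but the `resource_changes` structure is the standard Terraform plan JSON shape:

```python
# Illustrative Python version of the required-tags policy, operating
# on the JSON produced by `terraform show -json tfplan`.
REQUIRED_TAGS = ["Environment", "Owner", "CostCenter"]

def missing_tag_violations(plan):
    """Return one message per created resource lacking required tags."""
    violations = []
    for rc in plan.get("resource_changes", []):
        # Only inspect resources the plan will create
        if "create" not in rc["change"]["actions"]:
            continue
        tags = (rc["change"].get("after") or {}).get("tags") or {}
        missing = [t for t in REQUIRED_TAGS if t not in tags]
        if missing:
            violations.append(
                f"{rc['type']} '{rc['name']}' missing required tags: {missing}"
            )
    return violations
```

Running this over a plan that creates an S3 bucket tagged only with Environment would flag Owner and CostCenter as missing, mirroring the `deny` message the Rego rule produces.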

Layer 2: Encryption Enforcement

AI‑generated S3 buckets and RDS instances frequently lacked encryption—a SOC 2 requirement:

# policy/encryption.rego
package terraform.encryption

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_s3_bucket"
    resource.change.actions[_] == "create"

    # Check for a server-side encryption configuration resource
    not has_encryption_config(resource.name)

    msg := sprintf(
        "S3 bucket '%s' must have encryption enabled",
        [resource.name]
    )
}

# A bucket is considered encrypted when the plan also creates an
# aws_s3_bucket_server_side_encryption_configuration resource with a
# matching Terraform resource name (a common module convention).
has_encryption_config(bucket_name) {
    enc := input.resource_changes[_]
    enc.type == "aws_s3_bucket_server_side_encryption_configuration"
    enc.name == bucket_name
}
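The pairing logic can again be sketched in Python. Note the assumption baked in here: a bucket counts as encrypted only if the same plan creates a matching `aws_s3_bucket_server_side_encryption_configuration` resource, paired by Terraform resource name; real policies might instead match on the bucket argument. The `unencrypted_buckets` function is illustrative:

```python
# Illustrative Python version of the encryption rule: a created S3
# bucket passes only if the same plan also creates a matching
# server-side encryption configuration resource.
# Pairing by Terraform resource name is an assumption in this sketch.
def unencrypted_buckets(plan):
    changes = plan.get("resource_changes", [])
    # Names of buckets that have an encryption configuration in the plan
    encrypted = {
        rc["name"]
        for rc in changes
        if rc["type"] == "aws_s3_bucket_server_side_encryption_configuration"
    }
    return [
        f"S3 bucket '{rc['name']}' must have encryption enabled"
        for rc in changes
        if rc["type"] == "aws_s3_bucket"
        and "create" in rc["change"]["actions"]
        and rc["name"] not in encrypted
    ]
```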

Layer 3: IAM Least Privilege

The most dangerous AI pattern was wildcard IAM permissions. This policy catches overly permissive policies before they reach production:

# policy/iam.rego
package terraform.iam

deny[msg] {
    resource := input.resource_changes[_]
    resource.type == "aws_iam_policy"

    policy_doc := json.unmarshal(resource.change.after.policy)
    statement := policy_doc.Statement[_]

    statement.Effect == "Allow"
    wildcard_action(statement.Action)

    msg := sprintf(
        "IAM policy '%s' contains wildcard Action - use least privilege",
        [resource.name]
    )
}

# Action may be a single string or a list of strings
wildcard_action(action) { action == "*" }
wildcard_action(action) { action[_] == "*" }
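Because an IAM policy document's Action field may be either a single string or a list of strings, any checker has to normalize both shapes before looking for the bare wildcard. A hedged Python sketch of that logic (the `wildcard_policy_violations` function is illustrative, not from the article):

```python
import json

# Illustrative Python version of the IAM rule. IAM policy JSON allows
# Action (and Statement) to be either a single value or a list, so both
# shapes are normalized before checking for the bare "*" wildcard.
def wildcard_policy_violations(plan):
    violations = []
    for rc in plan.get("resource_changes", []):
        if rc["type"] != "aws_iam_policy":
            continue
        doc = json.loads(rc["change"]["after"]["policy"])
        statements = doc["Statement"]
        if isinstance(statements, dict):  # single statement object
            statements = [statements]
        for stmt in statements:
            actions = stmt.get("Action", [])
            if isinstance(actions, str):  # single action string
                actions = [actions]
            if stmt.get("Effect") == "Allow" and "*" in actions:
                violations.append(
                    f"IAM policy '{rc['name']}' contains wildcard Action"
                )
    return violations
```

This catches both `"Action": "*"` and `"Action": ["*"]`, which is exactly the case a naive list-only check would miss.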

Pipeline Integration

The policies were integrated into the GitHub Actions workflow, running Conftest against every Terraform plan:

- name: Policy Check
  run: |
    terraform plan -out=tfplan
    terraform show -json tfplan > tfplan.json
    conftest test tfplan.json --policy policy/

Any policy violation blocks the PR merge, with clear error messages explaining exactly what needs to be fixed. Jordan found that AI assistants could often fix the violations when given the specific error message—turning the guardrail into a feedback loop.

Results: MyCoCo’s Transformation

Within three weeks of implementing OPA guardrails, MyCoCo’s metrics shifted dramatically:

Metric                                       Before         After                          Reduction
Security findings per AI-generated module    47             3                              94%
Development velocity                                        ~70% of original gains kept
Tagging compliance (manual code)             Gaps present   Improved across the board

“We stopped thinking of AI as a code generator and started thinking of it as a fast first draft. The guardrails aren’t a speed bump—they’re the quality gate that makes the speed sustainable.”

Maya added the policies to MyCoCo’s security documentation, creating an “AI‑Generated Code Checklist” that new team members review before using coding assistants. The launch proceeded on schedule, with infrastructure that passed SOC 2 audit on the first attempt.

Key Takeaways

  • Syntax validity does not equal security compliance. AI‑generated code that passes terraform validate may still fail 90%+ of security requirements.
  • Organizational context is essential. Guardrails inject tagging, encryption, and least‑privilege policies that AI lacks.
  • Policy‑as‑code creates a feedback loop. Clear error messages let AI assistants (or developers) correct issues automatically.
  • Guardrails benefit all code. Once in place, manually written modules also become more compliant.
  • Sustainable speed requires automated quality gates. Combining AI assistance with OPA policies preserves velocity while ensuring compliance.