Troubleshooting EFS Mount Failures in EKS: The IAM Mount Option Mystery

Published: January 13, 2026 at 07:57 PM EST
4 min read
Source: Dev.to

TL;DR

If you see mount.nfs4: access denied by server while mounting 127.0.0.1:/ when mounting EFS volumes in EKS, and your security groups are correct, you are likely using an EFS file system policy without the iam mount option in your PersistentVolume definition.

The Problem

We were integrating a new reporting service into our EKS cluster that needed to write reports to a shared EFS filesystem. The pod kept failing to mount the volume with this cryptic error:

MountVolume.SetUp failed for volume "efs-pv": rpc error: code = Internal desc = Could not mount "{efs_id}:/"
Output: mount.nfs4: access denied by server while mounting 127.0.0.1:/
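
The error surfaces in the pod's events. A quick way to pull it up (the pod name below is hypothetical):

# Show mount-related events for the failing pod (pod name is illustrative)
kubectl describe pod reporting-service-xyz | grep -A 3 MountVolume

# Or list all mount failures in the current namespace
kubectl get events --field-selector reason=FailedMount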

The Investigation Journey

Initial Suspicions (All Wrong)

Theory 1: Security Group Issues

  • Verified NFS traffic (TCP 2049) allowed between worker nodes and EFS mount targets.
  • Mount targets existed in all Availability Zones.

Result: Security groups were perfect. Not the issue.
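
Both checks can be scripted with the AWS CLI; a minimal sketch, assuming $EFS_ID and $MOUNT_TARGET_ID are set:

# List mount targets with their AZ, IP, and lifecycle state
aws efs describe-mount-targets --file-system-id "$EFS_ID" \
  --query 'MountTargets[].{AZ:AvailabilityZoneName,IP:IpAddress,State:LifeCycleState}' \
  --output table

# Inspect the security groups attached to a given mount target
aws efs describe-mount-target-security-groups --mount-target-id "$MOUNT_TARGET_ID"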

Theory 2: EFS File System Policy

  • Recently added an IAM‑based file system policy to restrict access.
  • Policy included conditions like aws:PrincipalArn to whitelist specific IAM roles.

Breakthrough: Removing the policy made it work!
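
If you want to reproduce that test safely, back up the policy before deleting it (a sketch using the standard AWS CLI calls):

# Save the current file system policy first
aws efs describe-file-system-policy --file-system-id "$EFS_ID" > policy-backup.json

# Temporarily remove the policy to confirm it is the culprit
aws efs delete-file-system-policy --file-system-id "$EFS_ID"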

The Eureka Moment

Reading the AWS EFS troubleshooting documentation revealed:

If you don’t add the iam mount option with a restrictive file system policy, then the pods fail with the following error message:
mount.nfs4: access denied by server while mounting 127.0.0.1:/

Root Cause Analysis

1. EFS File System Policy Conditions

We used aws:PrincipalArn in our policy conditions:

{
  "Condition": {
    "ArnLike": {
      "aws:PrincipalArn": [
        "arn:aws:iam::123456789012:role/worker-node-role",
        "arn:aws:iam::123456789012:role/efs-csi-driver-role"
      ]
    }
  }
}

Problem: According to AWS docs, aws:PrincipalArn and most IAM condition keys are NOT enforced for NFS client mounts to EFS. Only these condition keys work:

  • aws:SecureTransport (Boolean)
  • aws:SourceIp (String – public IPs only)
  • elasticfilesystem:AccessPointArn (String)
  • elasticfilesystem:AccessedViaMountTarget (Boolean)

2. Missing IAM Mount Option

Our PersistentVolume definition omitted the iam mount option:

# BEFORE – Missing iam mount option
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # Required by Kubernetes; EFS itself is elastic
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs-csi-sc
  mountOptions:
    - tls                     # TLS was enabled, but iam was not
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "{efs_id}"

Without iam, the EFS mount helper performs an anonymous NFS mount; a file system policy that restricts access to specific IAM principals then denies it.

3. The EFS Mount Flow

When using the EFS CSI driver with the tls mount option:

  1. Node‑level mount happens first (via the worker node IAM role).
  2. Without iam → Anonymous NFS mount.
  3. With iam → Authenticated mount using IAM role credentials.
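
Conceptually, the node-level difference looks like this (a sketch of the EFS mount helper invocation the CSI driver ends up performing; the kubelet paths are illustrative):

# Without iam: the TLS tunnel is set up, but the NFS client presents no AWS identity
mount -t efs -o tls {efs_id}:/ /var/lib/kubelet/pods/.../mount

# With iam: the mount helper signs the request with the node role's credentials
mount -t efs -o tls,iam {efs_id}:/ /var/lib/kubelet/pods/.../mount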

The Solution

Fix 1: Add iam to mountOptions

# AFTER – With iam mount option
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi              # Required by Kubernetes; EFS itself is elastic
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: aws-efs-csi-sc
  mountOptions:
    - tls   # Encryption in transit
    - iam   # Enable IAM authentication
  csi:
    driver: efs.csi.aws.com
    volumeHandle: "{efs_id}"
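
Mount options are applied at mount time, so already-running pods won't pick up the change; re-apply the PV and recreate the pod (manifest and pod names here are illustrative):

kubectl apply -f efs-pv.yaml
kubectl delete pod reporting-service-xyz   # its controller recreates it and remounts the volume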

Fix 2: Use Only Supported EFS Condition Keys

If you need a file system policy, restrict it to the supported conditions:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ],
      "Resource": "arn:aws:elasticfilesystem:us-east-1:123456789012:file-system/{efs_id}",
      "Condition": {
        "Bool": {
          "elasticfilesystem:AccessedViaMountTarget": "true",
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}

This policy:

  • Requires TLS encryption (aws:SecureTransport).
  • Requires access via mount targets (elasticfilesystem:AccessedViaMountTarget).
  • Uses only supported condition keys.
  • Relies on security groups for network‑level access control.
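
Applying it is a single AWS CLI call, assuming the JSON above is saved as efs-policy.json:

aws efs put-file-system-policy \
  --file-system-id "$EFS_ID" \
  --policy file://efs-policy.json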

Key Learnings

  1. IAM Mount Option is Required for IAM Authorization
    Without -o iam, EFS mounts are anonymous. Any IAM‑based file system policy will deny access.

  2. Not All IAM Conditions Work with EFS
    Only four condition keys are enforced for NFS mounts. Using others creates a false sense of security.

  3. Layer Your Security Properly

    • Network Layer: Security groups (who can reach mount targets).
    • IAM Layer: IAM policies on roles (what actions are allowed).
    • File System Layer: EFS policy (additional restrictions).
  4. Read the Error Logs Carefully
    The error mentions 127.0.0.1 because the EFS mount helper creates a local stunnel proxy for TLS. The actual failure occurs at the IAM authorization layer, not the network layer (see the log check after this list).

  5. Test Mount Operations Manually
    SSH to a worker node and test the mount with the EFS mount helper:

    sudo mkdir -p /mnt/test
    sudo mount -t efs -o tls,iam {efs_id}:/ /mnt/test

    This validates the configuration outside of Kubernetes.
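
As noted in point 4, the mount helper's own log on the worker node is far more informative than the kubelet event; amazon-efs-utils writes to this path by default:

# On the worker node – each mount attempt and its failure reason is recorded here
sudo tail -n 50 /var/log/amazon/efs/mount.log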

Conclusion

What seemed like a complex IAM policy issue turned out to be a missing mount option. The key insight was that EFS file system policies require explicit IAM authentication via the iam mount option, and that most IAM condition keys don’t apply to NFS mounts.
