This article is the third in a series exploring the construction of Internal Developer Platforms (IDPs) using managed Kubernetes services. The first instalment delved into Google Kubernetes Engine (GKE), highlighting its native capabilities. Our second article then traversed the Azure landscape, focusing on Azure Kubernetes Service (AKS) and its integrated services. Now, in this final part, we turn our attention to Amazon Elastic Kubernetes Service (EKS), examining its indigenous tooling and where the powerful, extensible nature of Crossplane can bridge any gaps, culminating in a comparative summary across all three hyperscalers.
Forging a Secure and Efficient Developer Platform on Amazon Elastic Kubernetes Service
For platform engineers and security specialists, building a robust Internal Developer Platform (IDP) on Amazon Elastic Kubernetes Service (EKS) is about striking a balance: delivering unparalleled ease of use, swift deployment cycles, rock-solid stability, and impregnable security. This foundational layer, rooted deeply in AWS’s extensive suite of native services, marks the true commencement of your journey to an exemplary IDP.
At the very heart of this platform lies the immutable principle of Git as the single source of truth. Every application definition, infrastructure blueprint, and cluster configuration — typically articulated via Helm charts or Kustomize overlays — finds its definitive home within a Git repository. This commitment ensures an impeccable audit trail, predictable deployments, and consistent, version-controlled states.
Bringing your desired state from Git to life on your EKS clusters is the domain of Flux, a powerful GitOps tool. While AWS doesn’t offer a direct, fully managed “GitOps extension” like Azure Arc or Config Sync, Flux can be bootstrapped via eksctl’s GitOps integration or self-managed within your cluster. Flux continuously synchronises your cluster’s actual state with its declared state in Git, making it the bedrock for managing application workloads, platform services, and cluster configuration alike.
With Flux-driven GitOps, you’ll benefit from unparalleled stability and a clear lineage of changes, as any deviation from your Git-controlled blueprint is swiftly reconciled.
Granting teams the power to self-serve infrastructure, securely and efficiently, is a cornerstone of a mature IDP. This is where AWS Controllers for Kubernetes (ACK) enters the picture. ACK is a collection of Kubernetes operators from AWS that allow you to provision and manage AWS resources (such as Amazon RDS databases or AWS IAM roles) directly from your EKS cluster using familiar Kubernetes manifests.
ACK extends your Kubernetes API, enabling developers to express their infrastructure needs in familiar YAML, while platform teams maintain firm oversight of the underlying AWS accounts and permissions through the ACK controller’s own IAM roles. This provides a consistent Kubernetes-native control plane for managing both applications and their dependent AWS services.
This fusion accelerates development whilst preserving robust governance and control for your security and platform engineers.
Security and compliance, as we’ve iterated, are intrinsically woven into the fabric of your platform. Open Policy Agent (OPA) Gatekeeper, a popular open-source tool, is the sentinel of your IDP for in-cluster policy enforcement. This handles rules within the cluster, such as ensuring all container images originate from approved registries.
However, when it comes to managing AWS-level Policy Assignments — those overarching policies that govern behaviour across your AWS accounts or specific resources — the native AWS Controllers for Kubernetes (ACK) currently don’t offer dedicated Custom Resources. For instance, ACK can create IAM Roles and Policies directly, but it doesn’t provide a Kubernetes-native way to assign broader Service Control Policies (SCPs) at the AWS Organizations level, or to manage the widespread assignment of general IAM Policies across various principals or non-ACK-managed resources.
This is precisely where Crossplane with its AWS Provider proves invaluable. By providing Custom Resources like Policy (for IAM Policies) and potentially PolicyAssignment or Organization related CRs (if using the separate AWS Organizations Provider), Crossplane brings the entire lifecycle of these high-level policy assignments directly under your Kubernetes control plane.
By managing OPA Gatekeeper policies for in-cluster governance and using Crossplane for AWS-level Policy Assignments, every tweak to your security posture is version-controlled in Git, passes through familiar pull request workflows, and is automatically reconciled by Kubernetes, thus significantly mitigating risk and bolstering stability.
For sensitive information, such as database credentials, secure, direct injection into running pods is paramount. The AWS Secrets Manager Container Storage Interface (CSI) Driver is the go-to solution. It enables your EKS workloads to securely access secrets stored in AWS Secrets Manager by dynamically mounting them into your pods as a volume. This crucial detail means secrets are never persisted as native Kubernetes Secret objects, drastically reducing their exposure.
Observability is a bedrock of any resilient IDP. EKS seamlessly integrates with Amazon CloudWatch for comprehensive metrics and logs, and AWS X-Ray for distributed tracing. Many teams also opt for battle-tested open-source stacks like Prometheus and Grafana for metrics, Loki or Fluent Bit for logs, and Tempo or Jaeger for tracing. Instrumentation and dashboard creation can be templated, ensuring every new service comes with default alerts, logs, and runtime metrics straight out of the box for swift diagnostics.
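As an illustration of templated instrumentation, here is a sketch of a Prometheus Operator ServiceMonitor. It assumes you run a Prometheus Operator stack (e.g. kube-prometheus-stack) and that your service exposes a named metrics port; all names here are hypothetical.

```yaml
# Illustrative: scrape metrics from every Service labelled app: my-nginx
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-nginx-metrics
  namespace: dev-team-a
spec:
  selector:
    matchLabels:
      app: my-nginx
  endpoints:
    - port: metrics # Named port on the Service exposing Prometheus metrics
      interval: 30s
```

Stamping out a ServiceMonitor like this from a service template is what gives every new service its default metrics wiring out of the box.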
EKS offers robust node group configurations, including Managed Node Groups and Fargate profiles for serverless compute, enabling dynamic management of your underlying infrastructure. Managed Node Groups automate patching and scaling, while Fargate removes the need to provision and manage EC2 instances entirely. This markedly simplifies cluster management, enforces AWS’s recommended practices by default, and offers predictable billing — making it ideal for less critical workloads and freeing your platform team to focus on higher-value engineering.
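For instance, an eksctl ClusterConfig excerpt might mix both compute models; the node group sizes and namespace selector below are illustrative only.

```yaml
# Illustrative excerpt of an eksctl ClusterConfig
managedNodeGroups:
  - name: platform-services
    instanceType: m5.large
    minSize: 2
    maxSize: 5 # AWS automates AMI patching and rolling upgrades
fargateProfiles:
  - name: batch-jobs
    selectors:
      - namespace: batch # Pods in this namespace run on Fargate; no EC2 instances to manage
```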
EKS’s inherent flexibility supports diverse environment strategies, whether isolated namespaces within a single cluster or distinct clusters for varying criticality levels. With configurations managed in Git and deployed via Flux, applying environment-specific settings through overlays or Helm value files is a straightforward affair.
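A sketch of such an environment overlay, assuming a conventional base/overlays Kustomize layout (the paths and namespace are illustrative):

```yaml
# app-repository/apps/overlays/prod/kustomization.yaml (illustrative)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: prod-team-a # Environment-specific target namespace
resources:
  - ../../base # Shared manifests for all environments
patches:
  - path: replica-count.yaml # Prod-specific replica and resource settings
```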
This meticulously engineered workflow dictates that developers submit changes via pull requests to Git. Flux then deploys these changes. AWS Controllers for Kubernetes (ACK) provisions any necessary cloud infrastructure and manages IAM, Crossplane applies overarching AWS Policies, while the AWS Secrets Manager CSI Driver securely injects sensitive credentials at runtime. This cohesive approach significantly boosts both developer velocity and the operational control cherished by platform and security teams.
Consider a large retail company embarking on a microservices transformation, with a keen eye on rapid, secure, and stable deployments on AWS.
Their platform team’s core mission is to empower engineers to deploy and manage services and infrastructure autonomously, all while operating within robust guardrails. They designate Git repositories as the definitive source for every application and infrastructure configuration.
This foundational blueprint on EKS delivers the stability, security, and velocity indispensable for modern application delivery. 🚀
Here are example manifest files, using ACK for AWS infrastructure and IAM roles, Flux for GitOps, the AWS Secrets Manager CSI driver, OPA Gatekeeper, and Crossplane for AWS Policy assignments. Remember to replace placeholders like YOUR_AWS_ACCOUNT_ID, YOUR_AWS_REGION, YOUR_EKS_CLUSTER_OIDC_PROVIDER_URL, and resource names as appropriate.
1.1 AWS Secrets Manager Secrets & Flux install
You'd first need to create your secret in AWS Secrets Manager.
# Create a secret in AWS Secrets Manager
aws secretsmanager create-secret --name "my-app-db-password-sm" --secret-string '{"password":"your_strong_db_password_for_eks"}' --region YOUR_AWS_REGION
Note: my-app-db-password-sm is the name of your secret in AWS Secrets Manager. Storing the value as JSON lets the CSI driver extract individual keys (such as password) with jmesPath later on.
1.2 Flux install
eksctl enable flux --config-file <config-file>
Where the config-file would be something like:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: cluster-12
  region: eu-north-1
# other cluster config ...
gitops:
  flux:
    gitProvider: github # required. options are github, gitlab or git
    flags: # required. arbitrary map[string]string for all flux args.
      owner: "dr-who"
      repository: "our-org-gitops-repo"
      private: "true"
      branch: "main"
      namespace: "flux-system"
      path: "clusters/cluster-12"
      team: "team1,team2"
Refer to https://eksctl.io/usage/gitops-v2 for more information.
Before you can apply the policy manifests, you must have OPA Gatekeeper installed on your EKS cluster. The most common and recommended method is to use its official Helm chart. This step should be performed once per cluster.
First, add the Gatekeeper Helm repository:
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm repo update
Next, install Gatekeeper using Helm. It's recommended to install it in its own namespace (gatekeeper-system).
helm install gatekeeper gatekeeper/gatekeeper --namespace gatekeeper-system --create-namespace
This command installs the Gatekeeper admission controller and the necessary Custom Resource Definitions (CRDs) into the gatekeeper-system namespace, which will allow your cluster to understand and enforce the policy manifests you apply later.
2. Flux GitRepository and Kustomization (Kubernetes Manifests)
These manifests configure Flux to pull from your Git repositories and apply changes. You would typically kubectl apply these to your EKS cluster after Flux is installed (e.g., via the eksctl bootstrap shown earlier).
# platform-repository/flux-config/app-repo.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: app-source
  namespace: flux-system # Flux components namespace
spec:
  interval: 1m
  url: https://github.com/your-org/your-app-repo.git # Replace with your application Git repo
  ref:
    branch: main
  # Optional: For private repos
  # secretRef:
  #   name: flux-git-credentials # K8s Secret containing git credentials
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: app-kustomization
  namespace: flux-system
spec:
  interval: 5m
  path: "./apps/dev-team-a" # Path within your Git repo to the application manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-source
  targetNamespace: dev-team-a # The namespace where apps will be deployed
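When applications depend on infrastructure that is itself reconciled from Git (such as the ACK resources later in this article), Flux Kustomizations can be ordered with dependsOn. A sketch, assuming a separate infra-kustomization (hypothetical name) reconciles the infrastructure manifests:

```yaml
# Illustrative: ensure infrastructure reconciles before applications
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: app-kustomization
  namespace: flux-system
spec:
  dependsOn:
    - name: infra-kustomization # Hypothetical Kustomization covering ACK/Crossplane manifests
  interval: 5m
  path: "./apps/dev-team-a"
  prune: true
  sourceRef:
    kind: GitRepository
    name: app-source
```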
3. Application Deployment (Using AWS Secrets Manager CSI Driver)
This manifest defines a simple Nginx deployment that will get its secrets via the AWS Secrets Manager CSI driver. This would reside in your application's Git repository (e.g., app-repository/apps/dev-team-a/nginx-deployment.yaml), pulled by Flux.
# app-repository/apps/dev-team-a/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: dev-team-a # Ensure this namespace exists
  labels:
    app: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      serviceAccountName: my-app-eks-sa # Kubernetes Service Account for IRSA
      containers:
        - name: nginx
          image: public.ecr.aws/nginx/nginx:1.23.3 # From a registry approved by the OPA Gatekeeper policy
          ports:
            - containerPort: 80
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store" # Mount point for secrets
              readOnly: true
          env:
            - name: DB_PASSWORD_FILE_PATH # Your app reads the password from this file
              value: "/mnt/secrets-store/db-password"
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-db-secret-provider # Link to the SecretProviderClass
4. Kubernetes SecretProviderClass for AWS Secrets Manager (Managed by Flux/GitOps)
This resource defines which AWS Secrets Manager secrets your pods will access. This would be alongside your application deployment manifests in Git (e.g., app-repository/apps/dev-team-a/secret-provider-class.yaml).
# app-repository/apps/dev-team-a/secret-provider-class.yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-db-secret-provider
  namespace: dev-team-a
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "my-app-db-password-sm" # Name of the secret in AWS Secrets Manager
        objectType: "secretsmanager"
        jmesPath:
          - path: "password" # Extract the "password" key from the JSON secret
            objectAlias: "db-password" # The filename within the mounted volume
5. Amazon RDS Database Provisioning (ACK DBInstance Custom Resource)
This ACK Custom Resource will provision an Amazon RDS database. You'd typically deploy the ACK RDS Controller itself to your cluster first. This manifest would be in your infrastructure Git repository (e.g., infra-repository/rds-db.yaml), managed by Flux.
# infra-repository/rds-db.yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: my-app-rds-db # Name of the DBInstance CR in Kubernetes
  namespace: dev-team-a # Namespace where the ACK resource lives
spec:
  dbInstanceIdentifier: my-app-rds-db # Name for the RDS instance in AWS
  dbInstanceClass: db.t3.micro
  engine: postgres
  engineVersion: "14.7"
  allocatedStorage: 20
  dbName: myappdb
  masterUsername: appuser # This user will be managed by RDS
  masterUserPassword: # References a Kubernetes Secret holding the master password
    name: my-app-db-password-sm
    key: password # Key within the secret
  skipFinalSnapshot: true # Set to false for production!
  # Other necessary fields like DBSubnetGroupName, VPCSecurityGroupIDs, etc.
  # For full details, refer to ACK RDS Controller documentation.
Note: ACK's masterUserPassword field is a Kubernetes SecretKeyReference (name, key, and optionally namespace), not a direct pointer to AWS Secrets Manager. The example assumes a Kubernetes Secret named my-app-db-password-sm exists in the namespace (for instance, synced there from AWS Secrets Manager) holding the master password under the password key.
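Once provisioned, downstream workloads usually need the database endpoint. ACK's FieldExport resource (part of the common ACK runtime) can copy a field from the DBInstance status into a ConfigMap; a sketch with hypothetical names:

```yaml
# infra-repository/rds-db-endpoint-export.yaml (illustrative)
apiVersion: services.k8s.aws/v1alpha1
kind: FieldExport
metadata:
  name: my-app-rds-endpoint
  namespace: dev-team-a
spec:
  from:
    path: ".status.endpoint.address" # Field on the ACK DBInstance status
    resource:
      group: rds.services.k8s.aws
      kind: DBInstance
      name: my-app-rds-db
  to:
    kind: configmap # Target a ConfigMap that pods can reference as env or volume
    name: my-app-db-config
```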
6. AWS IAM Role for Service Accounts (IRSA) (ACK Role and Policy Custom Resources)
These manifests create the IAM role and attach a policy, configured for IRSA, managed by the ACK IAM Controller. This would be in your platform-repository/aws-iam/ directory, managed by Flux.
# platform-repository/aws-iam/my-app-irsa-role.yaml
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Role
metadata:
  name: my-app-eks-irsa-role # Name of the Role CR in Kubernetes
  namespace: dev-team-a # Namespace where the Role CR lives
spec:
  name: my-app-eks-irsa-role # Explicitly set the name for the AWS resource
  assumeRolePolicyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Federated": "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:oidc-provider/oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/YOUR_EKS_CLUSTER_OIDC_PROVIDER_URL"
          },
          "Action": "sts:AssumeRoleWithWebIdentity",
          "Condition": {
            "StringEquals": {
              "oidc.eks.YOUR_AWS_REGION.amazonaws.com/id/YOUR_EKS_CLUSTER_OIDC_PROVIDER_URL:sub": "system:serviceaccount:dev-team-a:my-app-eks-sa"
            }
          }
        }
      ]
    }
  description: "IAM role for my-app-eks-sa to access AWS Secrets Manager"
---
apiVersion: iam.services.k8s.aws/v1alpha1
kind: Policy
metadata:
  name: my-app-sm-access-policy # Name of the Policy CR in Kubernetes
  namespace: dev-team-a
spec:
  name: my-app-sm-access-policy # Explicitly set the name for the AWS resource
  description: "Allows reading specific secrets from Secrets Manager"
  policyDocument: |
    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": [
            "secretsmanager:GetSecretValue",
            "secretsmanager:DescribeSecret"
          ],
          "Resource": "arn:aws:secretsmanager:YOUR_AWS_REGION:YOUR_AWS_ACCOUNT_ID:secret:my-app-db-password-sm-*"
        }
      ]
    }
---
apiVersion: iam.services.k8s.aws/v1alpha1
kind: RolePolicyAttachment
metadata:
  name: my-app-sm-access-role-attachment
  namespace: dev-team-a
spec:
  policyRef:
    from:
      name: my-app-sm-access-policy # Refers to the Policy created above by ACK
  roleRef:
    from:
      name: my-app-eks-irsa-role # Refers to the Role created above by ACK
Important: replace YOUR_AWS_ACCOUNT_ID, YOUR_AWS_REGION, and YOUR_EKS_CLUSTER_OIDC_PROVIDER_URL with your cluster's actual values, and make sure the cluster's OIDC issuer is registered as an IAM identity provider (for example via eksctl utils associate-iam-oidc-provider); without it, the IRSA trust policy cannot be assumed.
7. Kubernetes Service Account (Linked to IAM Role)
This is your Kubernetes Service Account for the application pod, annotated to use the IAM role created by ACK. This manifest would be in your app-repository/apps/dev-team-a/ directory, pulled by Flux.
# app-repository/apps/dev-team-a/my-app-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-eks-sa
  namespace: dev-team-a
  annotations:
    # This annotation links the Kubernetes SA to the AWS IAM Role.
    # ACK manages the existence of this role.
    eks.amazonaws.com/role-arn: "arn:aws:iam::YOUR_AWS_ACCOUNT_ID:role/my-app-eks-irsa-role"
8. OPA Gatekeeper ConstraintTemplate and Constraint (Kubernetes Manifests)
These manifests define and apply your in-cluster policies for image enforcement. You'd typically install Gatekeeper itself via Helm. These policies would be in your platform-repository/policies/ directory, managed by Flux.
# platform-repository/policies/k8s-allowed-images-template.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedrepos
  annotations:
    description: >-
      Requires container images to come from an allowed list of repositories.
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedRepos
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8sallowedrepos

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.containers[_]
          satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("image '%v' comes from an unapproved repository. Allowed repos are %v", [container.image, input.parameters.repos])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.template.spec.initContainers[_]
          satisfied := [good | repo = input.parameters.repos[_]; good = startswith(container.image, repo)]
          not any(satisfied)
          msg := sprintf("initContainer image '%v' comes from an unapproved repository. Allowed repos are %v", [container.image, input.parameters.repos])
        }
---
# platform-repository/policies/k8s-allowed-images-constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: only-ecr-and-trusted-images
spec:
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces:
      - dev-team-a # Apply this policy to the dev-team-a namespace
  parameters:
    repos:
      - "YOUR_AWS_ACCOUNT_ID.dkr.ecr.YOUR_AWS_REGION.amazonaws.com/" # Your ECR
      - "public.ecr.aws/nginx/" # Official Nginx from Public ECR
      - "registry.k8s.io/" # Common Kubernetes images (successor to k8s.gcr.io)
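Before enforcing a constraint cluster-wide, teams commonly stage it in audit mode. Gatekeeper supports this via spec.enforcementAction; a sketch of an audit-only variant (the constraint name is hypothetical):

```yaml
# Illustrative: audit-only variant of the constraint for a staged rollout
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: only-ecr-and-trusted-images-audit
spec:
  enforcementAction: dryrun # Record violations in status without blocking deployments
  match:
    kinds:
      - apiGroups: ["apps"]
        kinds: ["Deployment", "StatefulSet", "DaemonSet"]
  parameters:
    repos:
      - "YOUR_AWS_ACCOUNT_ID.dkr.ecr.YOUR_AWS_REGION.amazonaws.com/"
```

Reviewing the violations Gatekeeper records against this constraint tells you what would break before you switch to the default deny behaviour.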
9. AWS Policy (Crossplane Policy Custom Resource for General AWS IAM Policy)
This manifest defines a general-purpose AWS IAM Policy using Crossplane. This policy is defined via Kubernetes and can then be attached to various IAM principals (users, roles, groups) or resources across your AWS account(s). This manifest would typically reside in your platform-repository/aws-policies/ directory, managed by Flux.
# platform-repository/aws-policies/enforce-cost-center-tag-policy.yaml
apiVersion: iam.aws.upbound.io/v1beta1
kind: Policy
metadata:
  name: enforce-cost-center-tag-policy # Crossplane managed resources are cluster-scoped, so no namespace
spec:
  forProvider:
    description: "Policy to enforce a CostCenter tag on resources created across the account or specific OUs."
    policy: |
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
              "Null": {
                "aws:RequestTag/CostCenter": "true"
              }
            }
          }
        ]
      }
  providerConfigRef:
    name: aws-provider-config
# Note: This Crossplane-managed IAM Policy can be used for broad enforcement. The
# Deny on Action "*" is illustrative; in practice, scope "Action" to the creation
# calls you want to gate, since many API calls carry no request tags at all.
# To make the policy effective, it needs to be attached to IAM entities (users,
# roles, groups) whose actions you want to restrict, or applied as a Service
# Control Policy (SCP) if you are using AWS Organizations and have the Crossplane
# AWS Organizations Provider installed.
# ACK's IAM Controller manages IAM Roles and Policies primarily for specific
# service integrations like IRSA; it does not manage arbitrary, broad policy
# assignments across your AWS accounts or Organizational Units (OUs) that are
# not tied to a specific ACK-managed resource.
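To actually bind such a policy, the Upbound AWS provider also exposes attachment resources. A minimal sketch using its RolePolicyAttachment kind, assuming a pre-existing IAM role named platform-admins-role (hypothetical):

```yaml
# platform-repository/aws-policies/enforce-cost-center-tag-attachment.yaml (illustrative)
apiVersion: iam.aws.upbound.io/v1beta1
kind: RolePolicyAttachment
metadata:
  name: enforce-cost-center-tag-attachment
spec:
  forProvider:
    policyArnRef:
      name: enforce-cost-center-tag-policy # Resolves the ARN from the Crossplane Policy resource
    role: platform-admins-role # Hypothetical existing IAM role to restrict
  providerConfigRef:
    name: aws-provider-config
```

Because both the Policy and its attachment live in Git, widening or narrowing the policy's reach becomes a reviewable pull request rather than a console change.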
This series has journeyed through the architectural nuances of building a Kubernetes-native, GitOps-driven Internal Developer Platform across Google Cloud's GKE, Microsoft Azure's AKS, and Amazon Web Services' EKS. While the core principles of Git as the single source of truth, declarative infrastructure, and automated policy enforcement remain constant, the specific tools and their level of "nativeness" vary significantly.
Ultimately, each cloud provider offers a compelling path to a GitOps-driven IDP on Kubernetes. The “most native” experience often comes with a degree of opinionation and bundling (GKE), while others provide more discrete, powerful building blocks that can be assembled with tools like ACK and Crossplane to fit very specific needs (EKS). The choice depends on your existing cloud footprint, organisational familiarity, and the desired level of abstraction and control — like ensuring your control plane is not on the same cloud as your workloads.
Want to accelerate your IDP journey? Mesoform helps teams design, build, and operate production-grade platforms on any cloud, without reinventing the wheel. Reach out to see how we can help.