This article is the first in a series exploring the construction of Internal Developer Platforms (IDPs) using managed Kubernetes services. We will cover GKE, AKS, and EKS. First up, we’ll take a look at Google’s offering, GKE.
For platform engineers and security experts, the journey to a robust Internal Developer Platform (IDP) on GKE starts with a focus on core capabilities: providing the easiest, quickest, most stable, and most secure way to deploy and manage applications and infrastructure. This foundational layer is where Google Cloud’s opinionated GitOps tools truly shine.
At the heart of our foundation for IDPs is Git as your single source of truth. Every application, infrastructure definition, and configuration — typically expressed as Helm charts or Kustomize overlays — lives in a Git repository. This ensures all changes are version-controlled, access-controlled, auditable, and repeatable.
In our GKE-native approach, deployments and cluster configuration are managed seamlessly via Config Sync. This powerful GitOps controller continuously reconciles the desired state defined in your Git repositories with the live state of your GKE clusters. Config Sync is most commonly documented as a tool for managing applications, but combined with the other operators covered below, it becomes central to managing cluster configuration, security policies, and cloud infrastructure as well.
By using Config Sync, you gain immense stability and traceability, knowing that any divergence from your Git-defined state is automatically remediated, and every change has a clear history.
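To make this concrete, here is a minimal sketch of a RootSync resource pointing Config Sync at a platform repository. The repository URL, branch, and directory are illustrative placeholders, and a real setup would use a proper auth method:

```yaml
# A minimal RootSync, assuming Config Sync is enabled on the cluster.
# The repo URL, branch, and dir are illustrative placeholders.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/your-org/platform-repository
    branch: main
    dir: "/"
    auth: none  # Public repos only; use token or gcpserviceaccount in practice
```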
Providing a secure and self-service way for teams to provision infrastructure is critical. This is where Config Controller steps in. Config Controller offers a declarative, Kubernetes-native approach to provisioning and managing Google Cloud resources directly from your GKE cluster. Instead of developers needing direct Cloud Console access, Terraform, or complex scripts, they can define their infrastructure requirements (such as Cloud SQL databases, Memorystore instances, or Pub/Sub topics) in YAML manifests alongside their application code.
Config Controller allows platform teams to define, review, and govern infrastructure through the same Git workflow as application code. This empowers development teams to provision what they need, quickly and safely, while security and platform engineers retain strong governance and control.
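As an illustration, a developer could request a Pub/Sub topic with nothing more than a short manifest. A minimal sketch (the topic name and namespace are illustrative):

```yaml
# A minimal Config Connector manifest for a Pub/Sub topic,
# committed alongside application code and reconciled via GitOps.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: my-app-events  # Illustrative topic name
  namespace: dev-team-a
```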
Security and compliance are essential to running applications in today’s complex web environments. They shouldn’t be an afterthought, and with GKE they’re built in via Policy Controller. Powered by Open Policy Agent (OPA) Gatekeeper, it is the enforcement arm of your IDP, allowing security experts and platform engineers to define and enforce granular policies across your GKE clusters. The scope is massive, but basic examples include restricting which image registries may be used, blocking privileged containers, and requiring labels or resource limits on workloads.
Policy Controller acts as a guardian, preventing and auditing misconfigurations and ensuring that deployments adhere to your established security posture, significantly reducing risk and improving stability.
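Policy Controller also ships with a library of pre-built constraint templates. As a small sketch, assuming the bundled K8sRequiredLabels template is installed, a constraint could require every namespace to carry a team label:

```yaml
# Requires a "team" label on every Namespace.
# Assumes the K8sRequiredLabels template from the bundled library is installed.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: namespaces-must-have-team-label
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]
  parameters:
    labels:
    - key: "team"
```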
We all deal with passwords and keys constantly, and managing them effectively can be extremely difficult. For sensitive information like database credentials, injecting secrets directly into workloads, rather than storing copies in the cluster, is the safer pattern, and the Google Secret Manager CSI Driver is the recommended method. It allows your GKE workloads to securely access secrets stored in Google Secret Manager by dynamically mounting them into your pods as a volume. This means secrets never live as native Kubernetes Secret objects, reducing their exposure.
Observability is fundamental to any IDP. GKE integrates with Google Cloud Operations Suite (Cloud Monitoring, Cloud Logging, Cloud Trace) and supports open-source stacks like Prometheus, Grafana, Loki, and Jaeger. Instrumentation and dashboard creation can be automated through templates so that every new service ships with default alerts, logs, and runtime metrics for swift troubleshooting and operational insights.
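For instance, with Google Cloud Managed Service for Prometheus enabled, a templated PodMonitoring resource can accompany every new service. A minimal sketch, assuming the demo app from later in this article exposes metrics on a port named metrics (an assumption, not part of the scenario):

```yaml
# Scrapes Prometheus metrics from pods labelled app: my-nginx.
# Assumes Managed Service for Prometheus is enabled and the app
# exposes metrics on a container port named "metrics" (illustrative).
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: my-nginx-monitoring
  namespace: dev-team-a
spec:
  selector:
    matchLabels:
      app: my-nginx
  endpoints:
  - port: metrics
    interval: 30s
```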
GKE’s flexibility allows for diverse environment strategies, whether isolated namespaces within a single cluster or separate clusters for different criticality levels. With configurations managed in Git, applying environment-specific settings via overlays is straightforward and handled by Config Sync.
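A minimal sketch of such an overlay, assuming a conventional base/overlays layout (the paths and values are illustrative):

```yaml
# app-repository/my-nginx-app/overlays/prod/kustomization.yaml (illustrative path)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../../base
patches:
- patch: |-
    - op: replace
      path: /spec/replicas
      value: 4
  target:
    kind: Deployment
    name: my-nginx
```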
This approach means developers submit changes via pull requests to Git. Config Sync picks up approved changes, Config Controller provisions any necessary infrastructure, Policy Controller ensures everything adheres to your organisation’s security and governance rules, and the Secret Manager CSI Driver injects secrets securely at runtime. This streamlined workflow enhances both developer velocity and the operational control desired by platform and security teams.
Consider a FinTech company prioritising rapid, secure, and stable deployment.
Their platform team’s primary objective is to empower engineers to deploy and manage services and infrastructure independently, with strong guardrails. They establish Git repositories as the definitive source for all application and infrastructure configurations.
This foundational approach on GKE delivers the stability, security, and velocity essential for modern application development. 🚀
Here are example manifest files for each element of the real-world scenario, specifically incorporating the Google Secret Manager CSI Driver. Remember to replace placeholders like GCP_PROJECT_ID.
Please also don’t use these examples in production. They are deliberately high-level and intended only to give an idea of what the setup looks like.
You’d first need to create your secret in Google Secret Manager.
```bash
# Example: create a secret in Secret Manager and add its first version
# This would typically be a one-off step or managed by a separate automation (e.g., Terraform)
gcloud secrets create my-app-db-password-sm --replication-policy="automatic"
# printf (rather than echo) avoids storing a trailing newline in the secret
printf "your_super_secret_db_password" | gcloud secrets versions add my-app-db-password-sm --data-file=-
```
Note: my-app-db-password-sm is the name of your secret in Google Secret Manager. Make a note of your Google Cloud Project ID as you’ll need it.
This manifest tells the Secret Manager CSI driver which secret to fetch from Google Secret Manager. This file would be part of your Git repository, perhaps in a platform/secrets/ directory, managed by Config Sync.
```yaml
# platform-repository/secrets/db-secret-provider.yaml
apiVersion: secrets-store.csi.k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-db-secret-provider
  namespace: dev-team-a  # Must be in the same namespace as the application using it
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: "projects/GCP_PROJECT_ID/secrets/my-app-db-password-sm/versions/latest"
        path: "db-password"  # The filename within the mounted volume in the pod
```
In short, resourceName identifies the Secret Manager secret version to fetch (here, the latest), and path is the filename under which the secret will be exposed inside the pod’s mounted volume.
Your application’s Deployment manifest will define a volume mount that consumes the SecretProviderClass.
```yaml
# app-repository/my-nginx-app/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
  namespace: dev-team-a
  labels:
    app: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      serviceAccountName: my-app-sa  # Essential for Workload Identity
      containers:
      - name: nginx
        image: nginx:1.23.3  # Policy Controller can enforce image restrictions here
        ports:
        - containerPort: 80
        volumeMounts:
        - name: secret-volume
          mountPath: "/mnt/secrets-store"  # The directory where secrets will be mounted
          readOnly: true
        env:
        - name: DB_PASSWORD_FILE
          value: "/mnt/secrets-store/db-password"  # App reads password from this file
      volumes:
      - name: secret-volume
        csi:
          driver: secrets-store.csi.k8s.io  # The CSI driver for secrets store
          readOnly: true
          volumeAttributes:
            secretProviderClass: my-db-secret-provider  # Links to the SecretProviderClass
```
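Once the Deployment is running, a quick sanity check (not part of the scenario itself) can confirm the CSI driver mounted the secret file:

```bash
# Read the mounted secret file from a running pod of the Deployment
kubectl exec -n dev-team-a deploy/my-nginx -- cat /mnt/secrets-store/db-password
```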
Key points to note: serviceAccountName: my-app-sa is what enables Workload Identity for the pod; the csi volume links to the SecretProviderClass via volumeAttributes; and the application reads the password from the file path exposed through the DB_PASSWORD_FILE environment variable, never from a Kubernetes Secret.
The Cloud SQL manifests no longer need to explicitly manage the password in Kubernetes, as the application will fetch it via the CSI driver. These would still live in your app-repository/my-nginx-app/infrastructure/ directory. Note that Config Connector models Cloud SQL as separate resources: SQLInstance for the instance itself, with SQLDatabase and SQLUser for the database and user.
```yaml
# app-repository/my-nginx-app/infrastructure/cloudsql.yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: my-app-database  # The name of the Cloud SQL instance
  namespace: dev-team-a
spec:
  databaseVersion: POSTGRES_14
  region: europe-west2  # Example region
  settings:
    tier: db-f1-micro  # Smallest tier, for example purposes only
    ipConfiguration:
      ipv4Enabled: true
      requireSsl: true
    backupConfiguration:
      enabled: true
      pointInTimeRecoveryEnabled: true  # The Postgres equivalent of binary logging
    databaseFlags:
    - name: cloudsql.iam_authentication
      value: "on"
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLDatabase
metadata:
  name: myappdb
  namespace: dev-team-a
spec:
  instanceRef:
    name: my-app-database
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: myappuser  # User for the application. Password handled by the CSI driver.
  namespace: dev-team-a
spec:
  instanceRef:
    name: my-app-database
  # No password field here; the app gets it directly via the CSI driver.
```
Note: while a SQLUser resource is defined, its password field is omitted because the application now fetches the password from the mounted secret provided by the CSI driver, rather than Config Controller injecting it into a Kubernetes Secret. These resources define the database instance, database, and user; the application’s consumption of the password is handled entirely at runtime.
These manifests establish Workload Identity, linking a Kubernetes Service Account to a Google Cloud Service Account with the necessary Secret Manager access. These would typically reside in your central platform-repository/iam/ directory.
```yaml
# platform-repository/iam/my-app-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: dev-team-a
  # This annotation links the Kubernetes SA to a GCP SA for Workload Identity
  annotations:
    iam.gke.io/gcp-service-account: my-app-gsa@GCP_PROJECT_ID.iam.gserviceaccount.com
---
# Define the Google Cloud Service Account using Config Controller
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: my-app-gsa  # Name of the Google Cloud Service Account
  namespace: config-controller-managed  # Namespace where Config Controller manages GCP resources
spec:
  displayName: "GSA for My App to access Secret Manager"
---
# Allow the Kubernetes SA to impersonate the GSA; without this binding,
# Workload Identity is not actually established
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: my-app-sa-workload-identity-binding
  namespace: config-controller-managed
spec:
  member: serviceAccount:GCP_PROJECT_ID.svc.id.goog[dev-team-a/my-app-sa]
  role: roles/iam.workloadIdentityUser
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: my-app-gsa
---
# Grant the Google Cloud Service Account permission to access the Secret Manager secret
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: my-app-gsa-secret-accessor-policy  # A unique name for this policy binding
  namespace: config-controller-managed
spec:
  member: serviceAccount:my-app-gsa@GCP_PROJECT_ID.iam.gserviceaccount.com
  role: roles/secretmanager.secretAccessor  # The minimum role to read secret payloads
  resourceRef:
    apiVersion: secretmanager.cnrm.cloud.google.com/v1beta1
    kind: SecretManagerSecret
    # The secret was created outside Config Connector (via gcloud above),
    # so it is referenced by its full resource name
    external: projects/GCP_PROJECT_ID/secrets/my-app-db-password-sm
```
Remember to replace GCP_PROJECT_ID with your actual Google Cloud Project ID.
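As another quick sanity check (again, not part of the scenario itself), once Config Controller has reconciled you could inspect the secret’s IAM policy; the GSA should appear with the secretAccessor role:

```bash
# List IAM bindings on the secret
gcloud secrets get-iam-policy my-app-db-password-sm --project=GCP_PROJECT_ID
```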
This ensures only images from your trusted registries can be deployed.
```yaml
# platform-repository/policies/image-whitelist.yaml
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8sallowedimagerepositories
spec:
  crd:
    spec:
      names:
        kind: K8sAllowedImageRepositories
      validation:
        openAPIV3Schema:
          properties:
            repositories:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package k8sallowedimagerepositories

      # An image is allowed if it starts with any of the listed repository prefixes
      image_allowed(image) {
        startswith(image, input.parameters.repositories[_])
      }

      # Deployments, StatefulSets and DaemonSets nest containers under spec.template.spec
      violation[{"msg": msg}] {
        container := input.review.object.spec.template.spec.containers[_]
        not image_allowed(container.image)
        msg := sprintf("Image '%v' is not from an allowed repository.", [container.image])
      }

      violation[{"msg": msg}] {
        container := input.review.object.spec.template.spec.initContainers[_]
        not image_allowed(container.image)
        msg := sprintf("Image '%v' is not from an allowed repository.", [container.image])
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedImageRepositories
metadata:
  name: only-gcr-and-dockerhub-official
spec:
  match:
    kinds:
    - apiGroups: ["apps"]
      kinds: ["Deployment", "StatefulSet", "DaemonSet"]
    namespaces:
    - dev-team-a  # Apply this policy to the dev-team-a namespace
  parameters:
    # Matching is a literal string prefix, so unqualified image names
    # like "nginx:1.23.3" need their own entry
    repositories:
    - "gcr.io/your-gcp-project-id/"  # Your organisation's GCR
    - "docker.io/nginx/"  # Official Nginx images from Docker Hub
    - "docker.io/google_containers/"  # Other trusted images
    - "nginx"  # Allows the unqualified official image used in the demo Deployment
```
Internal Developer Platforms aren’t about hiding complexity — they’re about exposing only what’s necessary. GKE provides a solid foundation to build these platforms, offering the scalability of Kubernetes with the operational simplicity of a managed service. When combined with GitOps, infrastructure-as-code, and a developer-focused portal, you get a platform that enables teams to move quickly and safely.
For engineers, GKE enables a future where provisioning a database, deploying a service, or rolling back a change is no longer a ticket, but a commit. With the right tooling and processes, you can offer this level of self-service without compromising security, consistency, or performance.
If you’re building an IDP, start with the tools developers already understand. Build guardrails, not gates. Automate the boring bits. Let GKE take care of the rest.
Looking to accelerate your IDP journey? Mesoform helps platform teams deliver powerful, developer-centric experiences on top of GKE. Get in touch to see how we can help you turn Kubernetes into a true internal platform your engineers will love.