Using GKE to Build a Developer-First Internal Platform

Architecting a Secure and Efficient IDP on GKE
For platform engineers and security experts, the journey to a robust Internal Developer Platform (IDP) on GKE starts with a focus on core capabilities: providing the easiest, quickest, most stable, and most secure way to deploy and manage applications and infrastructure. This foundational layer is where Google Cloud’s opinionated GitOps tools truly shine.
At the heart of our foundation for IDPs is Git as your single source of truth. Every application, infrastructure definition, and configuration — typically expressed as Helm charts or Kustomize overlays — lives in a Git repository. This ensures all changes are version-controlled, access-controlled, auditable, and repeatable.
Unifying Deployment and Configuration with Config Sync
Our GKE-native approach to deployments and cluster configuration is managed seamlessly via Config Sync. This GitOps controller continuously reconciles the desired state defined in your Git repositories with the live state of your GKE clusters. Config Sync is most often documented as a tool for managing applications, but when combined with other operators it's central to managing:
- Application Deployments: Ensuring your microservices are always deployed as specified in Git.
- Cluster-wide Configurations: Applying consistent RBAC, network policies, and resource quotas across your clusters.
- Multi-tenancy Management: Setting up isolated namespaces for different teams or environments, ensuring each is configured precisely as intended.
- Infrastructure Deployments: If you can define your infrastructure as a Kubernetes manifest, Config Sync will look after it for you.
By using Config Sync, you gain immense stability and traceability, knowing that any divergence from your Git-defined state is automatically remediated, and every change has a clear history.
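As a sketch of how this is wired up, a RootSync resource points Config Sync at a Git repository. The repository URL, branch, and directory below are hypothetical placeholders, not values from the scenario:

```yaml
# Minimal RootSync sketch: tells Config Sync which repo and path to reconcile.
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/platform-repository  # hypothetical repo
    branch: main
    dir: clusters/prod          # hypothetical directory of cluster configs
    auth: none                  # public repo; use a secretRef for private repos
```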
Self-Service Infrastructure Provisioning with Config Controller
Providing a secure and self-service way for teams to provision infrastructure is critical. This is where Config Controller steps in. Config Controller offers a declarative, Kubernetes-native approach to provisioning and managing Google Cloud resources directly from your GKE cluster. Instead of developers needing direct Cloud Console access, Terraform, or complex scripts, they can define their infrastructure requirements (like Cloud SQL databases, Memorystore instances, or Pub/Sub topics) right in YAML manifests alongside their application code.
Config Controller allows platform teams to:
- Define Standardised Resources: Curate a catalogue of approved infrastructure components.
- Control Provisioning: Manage how and where resources are created without exposing sensitive cloud credentials or permissions to developers.
- Integrate with GitOps: Combine Config Controller with Config Sync for a unified GitOps workflow, where infrastructure changes are also driven by Git.
This empowers development teams to provision what they need, quickly and safely, while maintaining strong governance and control for security and platform engineers.
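To illustrate, here is a minimal sketch of a Pub/Sub topic defined as a Kubernetes manifest via the Config Connector resource model that Config Controller manages. The topic name and namespace are illustrative:

```yaml
# A developer commits this alongside their app; Config Controller
# reconciles it into a real Pub/Sub topic in the project.
apiVersion: pubsub.cnrm.cloud.google.com/v1beta1
kind: PubSubTopic
metadata:
  name: orders-events        # hypothetical topic name
  namespace: team-a          # hypothetical tenant namespace
```

Because this is just another Kubernetes object, it flows through the same Git review and Config Sync reconciliation as application manifests.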
Enforcing Security and Compliance with Policy Controller
Security and compliance are essential to running applications in today's complex environments. They shouldn't be an afterthought, and with GKE they're built in via Policy Controller. Powered by Open Policy Agent (OPA) Gatekeeper, Policy Controller is the enforcement arm of your IDP, allowing security experts and platform engineers to define and enforce granular policies across your GKE clusters. The scope is vast, but some basic examples include:
- Image Verification: Ensuring only approved container images from trusted registries can be deployed.
- Resource Quotas: Preventing resource exhaustion and ensuring fair usage across namespaces.
- Network Policy Enforcement: Controlling inter-service communication and isolating workloads.
- Labelling Conventions: Mandating consistent metadata for better organisation and cost tracking.
- Custom Policies: Defining bespoke rules tailored to your organisation’s specific security and compliance requirements.
Policy Controller acts as a guardian, auditing and preventing misconfigurations and ensuring that deployments adhere to your established security posture, significantly reducing risk and improving stability.
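As a concrete sketch of the labelling-conventions example above, this constraint uses the `K8sRequiredLabels` template from the Gatekeeper constraint template library to require a `team` label on every namespace. The constraint name and label key are illustrative:

```yaml
# Rejects any Namespace created without a "team" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-team-label   # hypothetical constraint name
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: team            # hypothetical label key for cost tracking
```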
Centralised Secrets Management with the Secret Manager CSI Driver
We all deal with passwords and keys, and managing them effectively can be extremely difficult. For sensitive information like database credentials, injecting values directly from a central secrets store at runtime is far safer than scattering them across clusters and repositories. For this, the Google Secret Manager CSI Driver is the recommended method. It allows your GKE workloads to securely access secrets stored in Google Secret Manager by dynamically mounting them into your pods as a volume. This means secrets never live as native Kubernetes Secret objects, reducing their exposure.
Benefits of the Secret Manager CSI Driver:
- Enhanced Security: Secrets are fetched directly from Secret Manager at runtime, not stored in the cluster’s etcd or your Git repository.
- Dynamic Updates: Rotated secrets in Secret Manager can be automatically updated within running pods, meaning less downtime for credential refreshes.
- Fine-Grained Access Control: Access is managed via Google Cloud IAM, aligning with the principle of least privilege through Workload Identity.
Observability
Observability is fundamental to any IDP. GKE integrates with Google Cloud Operations Suite (Cloud Monitoring, Cloud Logging, Cloud Trace) and supports open-source stacks like Prometheus, Grafana, Loki, and Jaeger. Instrumentation and dashboard creation can be automated through templates so that every new service ships with default alerts, logs, and runtime metrics for swift troubleshooting and operational insights.
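As one example of templating this per service, a `PodMonitoring` resource from Google Cloud Managed Service for Prometheus can ship with every new workload so its metrics are scraped by default. The names, labels, and interval below are illustrative:

```yaml
# Scrapes Prometheus metrics from pods matching the app label.
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: my-app-metrics       # hypothetical name
  namespace: team-a          # hypothetical namespace
spec:
  selector:
    matchLabels:
      app: my-app            # assumes pods carry this label
  endpoints:
    - port: metrics          # assumes the container exposes a "metrics" port
      interval: 30s
```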
Environments and Workflow Designed for Control and Velocity
GKE’s flexibility allows for diverse environment strategies, whether isolated namespaces within a single cluster or separate clusters for different criticality levels. With configurations managed in Git, applying environment-specific settings via overlays is straightforward and handled by Config Sync.
This approach means developers submit changes via pull requests to Git. Config Sync picks up approved changes, Config Controller provisions any necessary infrastructure, Policy Controller ensures everything adheres to your organisation’s security and governance rules, and the Secret Manager CSI Driver injects secrets securely at runtime. This streamlined workflow enhances both developer velocity and the operational control desired by platform and security teams.
A Real-World Scenario: Focused on Foundation
Consider a FinTech company prioritising rapid, secure, and stable deployment.
Their platform team’s primary objective is to empower engineers to deploy and manage services and infrastructure independently, with strong guardrails. They establish Git repositories as the definitive source for all application and infrastructure configurations.
- Config Sync is deployed on GKE, constantly monitoring these repositories and ensuring that all cluster configurations and application deployments match the Git state.
- When developers need a new database, they simply define a CloudSQLInstance resource in their application’s Git repository. Config Controller detects this, provisions the database on Google Cloud, and links it with the necessary database users.
- Crucially, for the database password, the application pods leverage the Google Secret Manager CSI Driver. Developers configure a SecretProviderClass referencing the Secret Manager secret, and their application’s Service Account, via Workload Identity, is granted precise IAM permissions to access just that secret. The password is then mounted directly into the pod’s filesystem, never residing in Git or the cluster’s state.
- Policy Controller is configured to enforce strict rules: only approved container images can be deployed, namespaces have predefined resource quotas, and network policies are automatically applied to isolate services, ensuring compliance and security across all deployments.
The result is a highly efficient and secure deployment pipeline where:
- Engineers can quickly and easily deploy applications and provision infrastructure through a familiar Git workflow.
- Platform and security teams maintain rigorous control over policies, governance, and base configurations, ensuring compliance and reducing operational overhead.
This foundational approach on GKE delivers the stability, security, and velocity essential for modern application development. 🚀
Example Manifest Files for the Real-World Scenario
Here are example manifest files for each element of the real-world scenario, specifically incorporating the Google Secret Manager CSI Driver. Remember to replace placeholders like GCP_PROJECT_ID.
Please do not use these examples in production. They are deliberately high-level and intended only to illustrate what the approach looks like.
1. (Prerequisite) Google Secret Manager Secret
You’d first need to create your secret in Google Secret Manager.
Note: my-app-db-password-sm is the name of your secret in Google Secret Manager. Make a note of your Google Cloud Project ID as you’ll need it.
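A sketch of creating that secret with the gcloud CLI (the password value is a placeholder; replace GCP_PROJECT_ID with your project ID):

```
# Creates the secret and adds its first version from stdin.
echo -n "CHANGE_ME_PASSWORD" | gcloud secrets create my-app-db-password-sm \
  --project=GCP_PROJECT_ID \
  --replication-policy="automatic" \
  --data-file=-
```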
2. Kubernetes SecretProviderClass (Managed by Config Sync)
This manifest tells the Secret Manager CSI driver which secret to fetch from Google Secret Manager. This file would be part of your Git repository, perhaps in a platform/secrets/ directory, managed by Config Sync.
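A sketch of that SecretProviderClass; the object name and namespace are illustrative, while the secret name matches the one created above:

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: my-app-db-secret     # hypothetical name, referenced by the Deployment
  namespace: my-app          # hypothetical application namespace
spec:
  provider: gcp
  parameters:
    secrets: |
      - resourceName: "projects/GCP_PROJECT_ID/secrets/my-app-db-password-sm/versions/latest"
        path: "db-password"
```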
Explanation:
- provider: gcp: Specifies the Google Cloud provider for the Secrets Store CSI Driver.
- parameters.secrets.resourceName: This is the full resource path to your Secret Manager secret, including your project ID and the secret’s name and version. latest is often used for dynamic updates.
- parameters.secrets.path: This defines the filename that will contain the secret’s value within the mounted volume in your pod. So, the database password will be available at /mnt/secrets-store/db-password (assuming /mnt/secrets-store is your mount point).
3. Application Deployment (Using the CSI Driver, Managed by Config Sync)
Your application’s Deployment manifest will define a volume mount that consumes the SecretProviderClass.
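A sketch of that Deployment; the app name, image path, and namespace are illustrative, while the Service Account, mount path, and secret file name match the rest of the scenario:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app               # hypothetical application name
  namespace: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      serviceAccountName: my-app-sa   # configured for Workload Identity
      containers:
        - name: my-app
          image: europe-docker.pkg.dev/GCP_PROJECT_ID/approved-images/my-app:1.0.0  # hypothetical image
          env:
            - name: DB_PASSWORD_FILE  # the app reads the password from this file
              value: /mnt/secrets-store/db-password
          volumeMounts:
            - name: secrets-store
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: my-app-db-secret
```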
Key Changes and Notes:
- spec.template.spec.serviceAccountName: my-app-sa: The pod must use a Kubernetes Service Account that is correctly configured for Workload Identity to impersonate a Google Cloud Service Account with permissions to access Secret Manager.
- volumeMounts and volumes: These define how the CSI driver mounts the secrets into the pod. The path defined in the SecretProviderClass (db-password) will appear as a file within the mountPath (/mnt/secrets-store).
- env: Your application would then read the password from the specified file path (e.g., in Python: with open(os.environ['DB_PASSWORD_FILE'], 'r') as f: db_password = f.read().strip()).
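Expanding the inline Python snippet above into a runnable sketch: the helper below reads the password from the file named by the DB_PASSWORD_FILE environment variable, and a temporary file stands in for the CSI-mounted volume purely for local illustration:

```python
import os
import tempfile

def read_db_password(path_env_var: str = "DB_PASSWORD_FILE") -> str:
    """Read the database password from the file mounted by the CSI driver."""
    with open(os.environ[path_env_var], "r") as f:
        return f.read().strip()

# Simulate the CSI-mounted secret file locally (in the pod this would be
# /mnt/secrets-store/db-password, provided by the driver).
with tempfile.NamedTemporaryFile("w", delete=False) as tmp:
    tmp.write("s3cr3t-password\n")

os.environ["DB_PASSWORD_FILE"] = tmp.name
print(read_db_password())  # -> s3cr3t-password
```

Reading the file at use time (rather than caching it at startup) also lets the application pick up rotated credentials when the driver refreshes the mounted secret.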
4. CloudSQL Instance Provisioning (Managed by Config Controller)
The CloudSQLInstance manifest no longer needs to explicitly manage the password in Kubernetes, as the application will fetch it via the CSI driver. This would still live in your app-repository/my-nginx-app/infrastructure/ directory.
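A sketch of this using Config Connector's SQLInstance and SQLUser kinds (the article's "CloudSQLInstance" maps onto these resources); region, database version, and tier are illustrative choices:

```yaml
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLInstance
metadata:
  name: my-app-db            # hypothetical instance name
  namespace: my-app
spec:
  region: europe-west2       # hypothetical region
  databaseVersion: POSTGRES_15
  settings:
    tier: db-custom-1-3840   # hypothetical machine tier
---
apiVersion: sql.cnrm.cloud.google.com/v1beta1
kind: SQLUser
metadata:
  name: my-app-db-user       # hypothetical user name
  namespace: my-app
spec:
  instanceRef:
    name: my-app-db
  # password intentionally omitted: the application reads it at runtime
  # via the Secret Manager CSI Driver, as described below.
```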
Note: While the user block is here, the password field is omitted because the application is now responsible for fetching the password from the mounted secret provided by the CSI driver, rather than Config Controller injecting it into a Kubernetes Secret. The CloudSQLInstance resource mainly defines the database instance and user creation, not the application’s consumption of the password.
5. Kubernetes Service Account & IAM Bindings (Managed by Config Sync & Config Controller)
These manifests establish Workload Identity, linking a Kubernetes Service Account to a Google Cloud Service Account with the necessary Secret Manager access. These would typically reside in your central platform-repository/iam/ directory.
Remember to replace GCP_PROJECT_ID with your actual Google Cloud Project ID.
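A sketch of the Workload Identity wiring, using Config Connector's IAM kinds; the account names are illustrative, while the secret name and Kubernetes Service Account match the scenario:

```yaml
# Kubernetes Service Account used by the application pods.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app-sa
  namespace: my-app
  annotations:
    iam.gke.io/gcp-service-account: my-app-gsa@GCP_PROJECT_ID.iam.gserviceaccount.com
---
# Google Cloud Service Account the pods will impersonate.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMServiceAccount
metadata:
  name: my-app-gsa           # hypothetical GSA name
  namespace: my-app
spec:
  displayName: my-app Workload Identity service account
---
# Grants the GSA access to just the one Secret Manager secret.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: my-app-secret-access
  namespace: my-app
spec:
  member: serviceAccount:my-app-gsa@GCP_PROJECT_ID.iam.gserviceaccount.com
  role: roles/secretmanager.secretAccessor
  resourceRef:
    apiVersion: secretmanager.cnrm.cloud.google.com/v1beta1
    kind: SecretManagerSecret
    external: projects/GCP_PROJECT_ID/secrets/my-app-db-password-sm
---
# Allows the Kubernetes SA to impersonate the GSA via Workload Identity.
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: my-app-workload-identity
  namespace: my-app
spec:
  member: serviceAccount:GCP_PROJECT_ID.svc.id.goog[my-app/my-app-sa]
  role: roles/iam.workloadIdentityUser
  resourceRef:
    apiVersion: iam.cnrm.cloud.google.com/v1beta1
    kind: IAMServiceAccount
    name: my-app-gsa
```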
6. Policy Controller Constraint (Enforcing Approved Images, Managed by Config Sync)
This ensures only images from your trusted registries can be deployed.
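A sketch using the `K8sAllowedRepos` template from the Gatekeeper constraint template library; the registry path is an illustrative placeholder for your approved Artifact Registry repository:

```yaml
# Rejects pods whose container images are not from the approved registry.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: approved-image-registries
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "europe-docker.pkg.dev/GCP_PROJECT_ID/approved-images/"  # hypothetical registry
```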
Final Thoughts
Internal Developer Platforms aren’t about hiding complexity — they’re about exposing only what’s necessary. GKE provides a solid foundation to build these platforms, offering the scalability of Kubernetes with the operational simplicity of a managed service. When combined with GitOps, infrastructure-as-code, and a developer-focused portal, you get a platform that enables teams to move quickly and safely.
For engineers, GKE enables a future where provisioning a database, deploying a service, or rolling back a change is no longer a ticket, but a commit. With the right tooling and processes, you can offer this level of self-service without compromising security, consistency, or performance.
If you’re building an IDP, start with the tools developers already understand. Build guardrails, not gates. Automate the boring bits. Let GKE take care of the rest.
Looking to accelerate your IDP journey? Mesoform helps platform teams deliver powerful, developer-centric experiences on top of GKE. Get in touch to see how we can help you turn Kubernetes into a true internal platform your engineers will love.