# Stop Drowning in YAML: Typed Config and Code Generation with CUE, KCL, Dhall, and CDKs in 2025
YAML has been the lingua franca of cloud-native configuration for a decade. It’s simple, ubiquitous, and human-readable—until it isn’t. As teams scale services, environments, clusters, and policies, YAML turns into a copy–paste swamp: thousands of lines, inconsistent values, duplicated logic, and fragile templating glued together with bash. Delivery slows, and reliability suffers.
In 2025, we have better tools and patterns. Typed configuration languages like CUE, KCL, and Dhall let you express constraints, defaults, and compositions directly, and generate the right YAML for each environment. Kubernetes-focused CDKs (CDK8s), multi-cloud IaC frameworks (Pulumi), policy-as-code (OPA/Gatekeeper, Kyverno, and CEL-based ValidatingAdmissionPolicy), and GitOps loops (Argo CD, Flux) round out a robust, testable, and auditable workflow.
This article is a practical, opinionated guide to moving from YAML sprawl to typed config and code generation. We’ll cover:
- Why YAML sprawl emerges and how to measure its cost
- The typed config toolkit: CUE, KCL, Dhall
- Generators and CDKs: CDK8s, Pulumi, and friends
- Policy-as-code and enforcement at PR time and admission
- Composition patterns for platform and app teams
- A GitOps-friendly workflow that keeps the robot honest
- A pragmatic migration plan you can start this quarter
If you care about velocity, reliability, and security—and you’re tired of diffing 900-line YAML files—read on.
## Why YAML Sprawl Happens (and Why You Should Care)
YAML sprawl is the accumulation of configuration that’s tough to validate, evolve, or reuse. Common causes:
- Weak or implicit schemas. Many systems accept partial or flexible inputs. When schemas are implicit or undocumented, drift and surprises follow.
- Copy–paste reuse. Need a new environment? Copy the staging directory, tweak a dozen keys. Repeat this 50 times.
- Templating without types. Helm, Kustomize patches, ytt, and kpt are useful, but most offer limited type checking or constraint enforcement compared to real languages.
- Fragmentation across tools. Some values live in Helm values.yaml, others in Kustomize overlays, others in CI variables—often without a single source of truth.
- Human review bottlenecks. Without types and tests, you rely on reviewers to spot accidental breakage.
Symptoms:
- Long lead time for small changes: updating a common label requires touching dozens of files.
- Inconsistent security posture: some namespaces enforce PodSecurity, others slip through.
- Environment drift: staging differs from prod in subtle, undocumented ways.
- Slow MTTR: incident fixes are risky because the blast radius is unclear.
The business cost is real. Teams burn cycles reconciling diffs and fixing fat-finger mistakes. Release frequency declines as fear of accidental breakage grows. A typed, generative approach attacks these root causes.
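To put a number on the sprawl before migrating, you can count exact duplicates among manifests. A rough Python sketch (stdlib only; the normalization rule — dropping comments and blank lines — is a deliberate simplification):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def normalize(text: str) -> str:
    # Strip comments and blank lines so cosmetic differences don't hide duplicates.
    lines = [ln.rstrip() for ln in text.splitlines()]
    return "\n".join(ln for ln in lines if ln and not ln.lstrip().startswith("#"))

def duplicate_groups(root: Path) -> list[list[Path]]:
    # Group YAML files by the digest of their normalized content.
    groups = defaultdict(list)
    for path in sorted(root.rglob("*.yaml")):
        digest = hashlib.sha256(normalize(path.read_text()).encode()).hexdigest()
        groups[digest].append(path)
    return [paths for paths in groups.values() if len(paths) > 1]
```

Run it against your manifests directory; each group of two or more files is copy–paste debt you will otherwise maintain by hand.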
## The Case for Typed Config and Code Generation
Typed configuration and code generation bring software engineering discipline to infrastructure and platform configuration:
- Types and constraints: Catch errors early. Encode invariants (e.g., memory requests <= limits, image tags pinned) instead of relying on reviewer vigilance.
- Composability: Build reusable modules for common patterns (web service, CronJob, Kafka consumer, etc.).
- Defaults and derivations: Set secure defaults once, derive secondary values, and forbid or warn on overrides.
- Testability: Unit-test modules and policies, run conformance checks in CI, and reproduce builds hermetically.
- Refactorability: Centralize and update patterns without editing hundreds of files.
- Documentation and IDE support: Generate docs from types and constraints, enable IDE autocompletion.
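For instance, the requests-versus-limits invariant above can live in one checked function instead of a review checklist. A minimal Python sketch, assuming only a small subset of Kubernetes quantity suffixes:

```python
# Parse a small subset of Kubernetes quantities (e.g. "200m" CPU, "256Mi" memory).
_SUFFIXES = {"m": 0.001, "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def parse_quantity(q: str) -> float:
    for suffix, factor in _SUFFIXES.items():
        if q.endswith(suffix):
            return float(q[: -len(suffix)]) * factor
    return float(q)

def check_resources(requests: dict, limits: dict) -> list[str]:
    # Encode the invariant once, instead of relying on reviewer vigilance.
    errors = []
    for key in requests:
        if key in limits and parse_quantity(requests[key]) > parse_quantity(limits[key]):
            errors.append(f"{key}: request {requests[key]} exceeds limit {limits[key]}")
    return errors
```

Typed config languages give you this for free; the point is that the rule is executable, not prose in a runbook.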
The net effect: faster safe changes, fewer regressions, and less cognitive load on reviewers.
## The 2025 Stack: CUE, KCL, Dhall, and CDKs
Each of these tools tackles typed config from a different angle.
### CUE
CUE is a constraint-based language for defining, validating, and generating data. Key ideas:
- Data plus schema in one: You can define a structure and fill it with values in the same document.
- Unification: Multiple partial definitions of a value are merged, and conflicts are errors.
- First-class constraints: Ranges, regexes, or relationships between fields are checked by the tooling.
- Strong Kubernetes ecosystem support: CUE is used for templating, validation, and generation workflows.
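To build intuition for unification, here is a toy Python model of the merge-with-conflict-errors idea (not CUE's full semantics — defaults, constraints, and closedness are omitted):

```python
def unify(a, b):
    # Merge two partial configs; agreeing values unify, conflicting scalars are an error.
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            out[key] = unify(a[key], value) if key in a else value
        return out
    if a == b:
        return a
    raise ValueError(f"conflict: {a!r} vs {b!r}")
```

In CUE, every definition of a field is a constraint; two files can each say part of the truth, and the tool either agrees on a single value or refuses to export.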
Example: a CUE module that defines a generic service contract and emits Kubernetes objects.
```cue
// app.cue
package app

// A high-level application contract for a containerized HTTP service.
App: {
	name:     string & =~"^[a-z0-9-]+$"
	image:    string
	version:  string
	replicas: >=1 & <=20 | *3
	port:     >=1 & <=65535 | *8080
	env: [string]: string | *""
	cpu: {
		request: string | *"200m"
		limit:   string | *"500m"
	}
	memory: {
		request: string | *"256Mi"
		limit:   string | *"512Mi"
	}
	labels: [string]:      string | *""
	annotations: [string]: string | *""
}

// Derive Kubernetes manifests from the App contract.
_k8s: {
	apiVersion: "apps/v1"
	kind:       "Deployment"
	metadata: {
		name:   App.name
		labels: App.labels
	}
	spec: {
		replicas: App.replicas
		selector: matchLabels: app: App.name
		template: {
			metadata: {
				labels:      {app: App.name} & App.labels
				annotations: App.annotations
			}
			spec: containers: [{
				name:  App.name
				image: "\(App.image):\(App.version)"
				ports: [{containerPort: App.port}]
				resources: {
					requests: {cpu: App.cpu.request, memory: App.memory.request}
					limits:   {cpu: App.cpu.limit, memory: App.memory.limit}
				}
				env: [for k, v in App.env {name: k, value: v}]
			}]
		}
	}
}

// Similarly derive a Service.
_svc: {
	apiVersion: "v1"
	kind:       "Service"
	metadata: name: App.name
	spec: {
		selector: app: App.name
		ports: [{port: 80, targetPort: App.port}]
	}
}
```
Use case input and export to YAML:
```cue
// prod.cue
// Same package as app.cue: the instance unifies with the contract.
package app

App: {
	name:     "catalog-api"
	image:    "registry.example.com/catalog"
	version:  "1.12.3"
	replicas: 5
	env: LOG_LEVEL: "info"
	labels: {
		"app.kubernetes.io/part-of":   "shop"
		"app.kubernetes.io/component": "api"
	}
}

output: [_k8s, _svc]
```
Then run:
```bash
cue export app.cue prod.cue -e output --out yaml > manifests.yaml
```
CUE validates constraints at export time. Mistakes (e.g., replicas: 1000, invalid name) fail fast.
### KCL
KCL (Kusion Configuration Language) is a statically typed, policy-friendly, declarative configuration language with Python-like syntax. It targets scalable, layered configuration with strong validation and policy features.
Highlights:
- Familiar syntax, strong type system, and schema validation
- Layering and patching semantics for environments and overrides
- Good Kubernetes integrations and policy capabilities
A KCL example for the same app pattern:
```kcl
schema App:
    name: str
    image: str
    version: str
    replicas: int = 3
    port: int = 8080
    env: {str:str} = {}
    cpu: {str:str} = {"request": "200m", "limit": "500m"}
    memory: {str:str} = {"request": "256Mi", "limit": "512Mi"}
    labels: {str:str} = {}
    annotations: {str:str} = {}

# Instantiate
app = App {
    name = "catalog-api"
    image = "registry.example.com/catalog"
    version = "1.12.3"
    replicas = 5
    env = {"LOG_LEVEL": "info"}
    labels = {
        "app.kubernetes.io/part-of": "shop"
        "app.kubernetes.io/component": "api"
    }
}

# Render Kubernetes objects
k8s_deploy = {
    apiVersion = "apps/v1"
    kind = "Deployment"
    metadata.name = app.name
    metadata.labels = app.labels
    spec = {
        replicas = app.replicas
        selector.matchLabels.app = app.name
        template = {
            metadata.labels = app.labels | {"app": app.name}
            metadata.annotations = app.annotations
            spec.containers = [{
                name = app.name
                image = "${app.image}:${app.version}"
                ports = [{containerPort = app.port}]
                resources = {
                    requests = {cpu = app.cpu.request, memory = app.memory.request}
                    limits = {cpu = app.cpu.limit, memory = app.memory.limit}
                }
                env = [{name = k, value = v} for k, v in app.env]
            }]
        }
    }
}

k8s_svc = {
    apiVersion = "v1"
    kind = "Service"
    metadata.name = app.name
    spec.selector.app = app.name
    spec.ports = [{port = 80, targetPort = app.port}]
}

output = [k8s_deploy, k8s_svc]
```
KCL tooling can export to YAML and enforce schema constraints. Layering lets you define base settings and patch them per environment.
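Conceptually, layering is a deep merge where environment patches win. A Python sketch of the idea (an approximation for intuition, not KCL's exact merge semantics):

```python
def layer(base: dict, override: dict) -> dict:
    # Environment overrides win; nested dicts merge recursively.
    out = dict(base)
    for key, value in override.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = layer(out[key], value)
        else:
            out[key] = value
    return out

base = {"replicas": 3, "env": {"LOG_LEVEL": "info"}}
prod = layer(base, {"replicas": 5})
staging = layer(base, {"env": {"LOG_LEVEL": "debug"}})
```

The base stays the single source of truth; each environment file carries only its small delta.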
### Dhall
Dhall is a total, strongly typed functional configuration language. Totality means all programs terminate and imports are pure. That gives you reproducible builds, referential transparency, and static type checking across configs.
Highlights:
- Guarantees termination and purity
- Strong typing with parametric modules
- Import caching and integrity checks
- Mature Kubernetes ecosystem bindings
Example: Use Dhall to define a function from a typed App to Kubernetes resources, then convert with dhall-to-yaml.
```dhall
-- App.dhall
let App =
      { name : Text
      , image : Text
      , version : Text
      , replicas : Natural
      , port : Natural
      , env : List { name : Text, value : Text }
      }

let mkDeployment =
      \(a : App) ->
        { apiVersion = "apps/v1"
        , kind = "Deployment"
        , metadata = { name = a.name }
        , spec =
            { replicas = a.replicas
            , selector = { matchLabels = { app = a.name } }
            , template =
                { metadata = { labels = { app = a.name } }
                , spec =
                    { containers =
                        [ { name = a.name
                          , image = a.image ++ ":" ++ a.version
                          , ports = [ { containerPort = a.port } ]
                          , env = a.env
                          }
                        ]
                    }
                }
            }
        }

let mkService =
      \(a : App) ->
        { apiVersion = "v1"
        , kind = "Service"
        , metadata = { name = a.name }
        , spec =
            { selector = { app = a.name }
            , ports = [ { port = 80, targetPort = a.port } ]
            }
        }

in  { App = App, mkDeployment = mkDeployment, mkService = mkService }
```
Use it:
```dhall
-- prod.dhall
let tools = ./App.dhall

let app
    : tools.App
    = { name = "catalog-api"
      , image = "registry.example.com/catalog"
      , version = "1.12.3"
      , replicas = 5
      , port = 8080
      , env = [ { name = "LOG_LEVEL", value = "info" } ]
      }

-- Dhall lists are homogeneous, so emit a record of manifests
-- (Deployment and Service have different record types).
in  { deployment = tools.mkDeployment app, service = tools.mkService app }
```
Export:
```bash
dhall-to-yaml --omitNull < prod.dhall > manifests.yaml
```
Dhall’s appeal is airtight types and reproducibility. The trade-off is a steeper learning curve for teams unfamiliar with functional programming.
### CDK8s and Pulumi for Kubernetes
Sometimes you need the full expressive power of a general-purpose language—rich conditionals, loops, test frameworks, and package ecosystems—particularly for infrastructure composition and abstractions.
- CDK8s: A Kubernetes-focused Cloud Development Kit. Define constructs in TypeScript, Python, Go, or Java and synthesize YAML.
- Pulumi: Multi-cloud IaC in real languages, with a Kubernetes provider. You can choose to apply directly or generate YAML for GitOps.
CDK8s example in TypeScript:
```ts
import { App, Chart } from 'cdk8s';
import { Construct } from 'constructs';
// Generated by `cdk8s import k8s` — raw Kubernetes API object constructs.
import { KubeDeployment, KubeService, IntOrString } from './imports/k8s';

interface WebServiceProps {
  name: string;
  image: string;
  version: string;
  replicas?: number;
  port?: number;
}

class WebService extends Chart {
  constructor(scope: Construct, id: string, props: WebServiceProps) {
    super(scope, id);
    const port = props.port ?? 8080;

    new KubeDeployment(this, 'dep', {
      metadata: { name: props.name },
      spec: {
        replicas: props.replicas ?? 3,
        selector: { matchLabels: { app: props.name } },
        template: {
          metadata: { labels: { app: props.name } },
          spec: {
            containers: [
              {
                name: props.name,
                image: `${props.image}:${props.version}`,
                ports: [{ containerPort: port }],
              },
            ],
          },
        },
      },
    });

    new KubeService(this, 'svc', {
      metadata: { name: props.name },
      spec: {
        selector: { app: props.name },
        ports: [{ port: 80, targetPort: IntOrString.fromNumber(port) }],
      },
    });
  }
}

const app = new App();
new WebService(app, 'catalog', {
  name: 'catalog-api',
  image: 'registry.example.com/catalog',
  version: '1.12.3',
  replicas: 5,
});
app.synth(); // emits YAML in dist/
```
CDKs shine when you need libraries, testing, or complex orchestration. The caution: don’t hide too much imperative logic in constructs; keep outputs deterministic for GitOps.
## Policy-as-Code: Guardrails that Scale
Typed config reduces mistakes but doesn’t eliminate risk. Policy-as-code enforces organization-wide rules consistently:
- OPA Gatekeeper: Admission controller enforcing Rego policies with CRD-based constraints.
- Kyverno: Kubernetes-native policies expressed as YAML with JSONPath-like expressions.
- ValidatingAdmissionPolicy (VAP) using CEL: Built-in Kubernetes validating admission based on CEL expressions; GA in modern clusters.
- Conftest: Run Rego policies on files in CI (before they reach the cluster).
Examples:
Rego policy to require non-root containers and pinned images (no latest):
```rego
package kubernetes.admission

# Conftest-style input: the parsed manifest itself is the policy input document.
deny[msg] {
    input.kind == "Deployment"
    container := input.spec.template.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("container %q must set runAsNonRoot", [container.name])
}

deny[msg] {
    input.kind == "Deployment"
    image := input.spec.template.spec.containers[_].image
    endswith(image, ":latest")
    msg := sprintf("container image %q must not use :latest", [image])
}
```
CEL ValidatingAdmissionPolicy to enforce resource requests/limits:
```yaml
# Note: a ValidatingAdmissionPolicyBinding is also required to put this policy into effect.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: enforce-resources
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: "has(object.spec.template.spec.containers)"
      message: "containers required"
    - expression: >-
        object.spec.template.spec.containers.all(c,
          has(c.resources) && has(c.resources.requests) && has(c.resources.limits))
      message: "all containers must set requests and limits"
```
Run the same policies in CI with conftest or kyverno-cli to fail pull requests early.
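The same two rules are easy to unit-test anywhere you can parse the manifest. A Python sketch that mirrors the Rego above for a Deployment dict:

```python
def policy_violations(manifest: dict) -> list[str]:
    # Mirror of the two admission rules: runAsNonRoot required, :latest forbidden.
    if manifest.get("kind") != "Deployment":
        return []
    violations = []
    containers = (manifest.get("spec", {})
                          .get("template", {})
                          .get("spec", {})
                          .get("containers", []))
    for c in containers:
        if not c.get("securityContext", {}).get("runAsNonRoot"):
            violations.append(f'container "{c.get("name")}" must set runAsNonRoot')
        if c.get("image", "").endswith(":latest"):
            violations.append(f'container image "{c["image"]}" must not use :latest')
    return violations
```

Writing the rule twice (Rego for admission, a test double in your CI language) is optional; the point is that each rule has positive and negative test cases somewhere.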
## Composition Patterns: From Low-Level Resources to Golden Paths
Successful platforms encode opinionated, reusable building blocks so application teams don’t manage low-level manifests directly.
Patterns to adopt:
- Contract-first modules: Define a minimal app contract (name, image, port, SLO hints) and emit the full set of K8s objects (Deployment, Service, HPA, PDB, NetworkPolicy). Keep the contract stable and versioned.
- Layered configuration: Base defaults, then environment overlays (prod/staging/dev) using KCL layers, CUE unification, or Dhall functions with records of overrides.
- Policy-backed defaults: Disallow disabling critical security defaults unless a documented exception exists.
- Composition types: Where possible, expose typed compositions rather than raw templates. For example, Crossplane’s CompositeResourceDefinitions (XRDs) for platform-level services.
- Deterministic generation: Ensure no timestamps or random values leak into outputs. Seed or strip them.
- Registry of building blocks: Package and version your modules (OCI artifacts, Git submodules, language package registries) with changelogs and migration notes.
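Deterministic generation is cheap to verify: fingerprint the output with volatile fields stripped and compare across runs. A Python sketch (the annotation keys treated as volatile are illustrative):

```python
import hashlib
import json

VOLATILE = {"generated-at", "build-timestamp"}  # illustrative annotation keys

def fingerprint(manifest: dict) -> str:
    # Deep-copy via JSON, drop fields allowed to vary, hash with stable key order.
    pruned = json.loads(json.dumps(manifest))
    annotations = pruned.get("metadata", {}).get("annotations", {})
    for key in VOLATILE:
        annotations.pop(key, None)
    return hashlib.sha256(json.dumps(pruned, sort_keys=True).encode()).hexdigest()
```

In CI, render twice (or render on two machines) and assert the fingerprints match; any mismatch means a timestamp, random value, or unordered collection leaked into your outputs.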
Anti-patterns to avoid:
- Kitchen-sink inputs: Don’t expose every possible knob of Kubernetes. If teams want raw control, they can opt out—but default to simplicity.
- Hidden side effects: Generators that mutate state in external systems break hermetic builds. Keep generation pure; apply changes explicitly in deployment steps.
- Mixing templating layers: Helm over Kustomize over ytt over bash. Pick one typed layer for composition, keep the rest simple.
## A GitOps-Friendly Workflow
GitOps—where the cluster state converges to what’s in Git—pairs well with typed config and generation when you structure the pipeline correctly.
Recommended flow:
- Source of truth
  - Keep typed modules (CUE/KCL/Dhall or CDK code) in versioned repositories.
  - Keep generated YAML in a separate, apply-only repo, or in a dedicated directory with clear provenance annotations.
- Deterministic code generation
  - Use a reproducible container image for generation (pinned digests).
  - Lock module dependencies (checksums) and capture versions in a lockfile.
- PR-time checks
  - Run generators in CI and commit the synthesized manifests as part of the PR.
  - Validate with schema checks (kubeconform) and policy checks (conftest/OPA, kyverno-cli).
  - Run unit tests for modules and snapshot tests for generated YAML.
- Signing and provenance
  - Sign generated artifacts (Sigstore/cosign) and include SLSA provenance.
  - Annotate manifests with module versions, commit SHA, and build timestamp (as annotations only, not in labels used by selectors).
- Argo CD/Flux deployment
  - Argo CD or Flux watches the manifests repo and applies changes.
  - Admission policies (VAP/OPA/Kyverno) enforce runtime guardrails.
- Drift detection
  - Enable drift alerts; out-of-band changes trigger reconciliation and a ticket.
- Observability and feedback
  - Emit metrics: generation duration, policy violations, rollback frequency.
  - Collect deployment SLOs: sync latency, error rate, change failure rate.
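The snapshot tests mentioned above reduce to a golden-file comparison. A minimal Python sketch (the file layout and update flag are hypothetical):

```python
from pathlib import Path

def assert_matches_snapshot(rendered: str, snapshot: Path, update: bool = False) -> None:
    # On first run (or explicit update) write the golden file;
    # afterwards, any drift in generated output fails the test.
    if update or not snapshot.exists():
        snapshot.write_text(rendered)
        return
    if rendered != snapshot.read_text():
        raise AssertionError(f"generated output drifted from {snapshot.name}")
```

Reviewers then see golden-file diffs in the PR alongside the typed-input diff, which is usually all the context they need.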
With this setup, a pull request shows both the typed input diff and the generated manifest diff, plus policy outcomes—reviewers gain context and confidence.
## Migration Strategy: From YAML Sprawl to Typed Confidence
You don’t need a big-bang rewrite. Migrate incrementally.
- Inventory and measure
  - Catalog your manifest sources: Helm charts, Kustomize overlays, raw YAML, operators.
  - Measure pain: count duplicated files, time-to-review, and common incident causes.
- Pick a typed foundation
  - If your team favors constraint-first declarative config, pick CUE or KCL.
  - If your team prefers a typed FP style, evaluate Dhall.
  - If your platform already relies on TypeScript/Python and you need libraries/tests, consider CDK8s or Pulumi to generate YAML.
- Start with validation
  - Add kubeconform/OPA/Kyverno checks in CI on current YAML. This gives immediate safety without structural changes.
  - Add admission policies (CEL VAP) for critical invariants.
- Wrap a small slice
  - Select one service type (e.g., internal HTTP service) and implement a typed module that emits Deployment/Service/HPA/NetworkPolicy.
  - Keep outputs API-compatible with current expectations to minimize risk.
- Adopt environment layering
  - Define a base contract and apply environment-specific overrides via CUE unification, KCL layers, or Dhall records.
  - Remove Kustomize/Helm layering where replaced.
- Build a generation pipeline
  - Add a generator job to CI to synthesize manifests from typed inputs, commit to the manifests repo, run policy checks, and open PRs.
- Expand coverage
  - Add more constructs: CronJobs, Jobs, StatefulSets, PDBs, Ingress/Gateway API, RBAC, PodSecurity, NetworkPolicy, ExternalSecrets.
  - Provide a migration guide for app teams with examples and a linter that flags legacy patterns.
- Decommission legacy paths
  - Once coverage reaches 80%+, freeze old directories and lock Helm values. New services must use typed modules.
- Institutionalize patterns
  - Document golden paths and provide a CLI or cookiecutter to scaffold projects.
  - Add platform-level scorecards and dashboards for policy compliance.
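A legacy-pattern linter can start as a handful of line-level regexes — for example, flagging unrendered Helm templating and floating `latest` tags (the patterns here are illustrative, not exhaustive):

```python
import re

# Illustrative legacy patterns to flag during migration.
LEGACY_PATTERNS = {
    "helm-template": re.compile(r"{{.*}}"),
    "latest-tag": re.compile(r"image:.*:latest\b"),
}

def lint(text: str) -> list[tuple[int, str]]:
    # Return (line number, pattern name) for every match.
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in LEGACY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings
```

Wire it into CI as a warning first, then flip it to a hard failure once the noise is gone.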
Success criteria:
- Lead time for changes drops (hours to minutes)
- Policy violations are caught in PRs, not at runtime
- Reviewers comment on intent, not YAML shape
- Incident postmortems point to fewer config mistakes
## End-to-End Example: CUE + GitOps + Policy
Let’s build a minimal example that you can adapt.
Project structure:
```
platform/                  # typed modules and policies
  cue.mod/
  app.cue                  # contract + generation
  policies/
    conftest/
      policies.rego
  tools/
    container/Dockerfile   # hermetic generator image
services/
  catalog/
    prod.cue
    staging.cue
manifests/                 # output repo or directory
  catalog/
    prod/
    staging/
```
- Define the contract and generator (CUE)
We already saw app.cue above. Add a few more secure defaults:
```cue
// Security defaults
App: {
	runAsNonRoot:             bool | *true
	allowPrivilegeEscalation: bool | *false
}

_k8s: spec: template: spec: containers: [{
	securityContext: {
		runAsNonRoot:             App.runAsNonRoot
		allowPrivilegeEscalation: App.allowPrivilegeEscalation
		capabilities: drop: ["ALL"]
		seccompProfile: type: "RuntimeDefault"
	}
}]
```
- Service definitions
```cue
// services/catalog/prod.cue
// Same package as platform/app.cue; the files are combined at export time.
package app

App: {
	name:     "catalog-api"
	image:    "registry.example.com/catalog"
	version:  "1.12.3"
	replicas: 5
	env: LOG_LEVEL: "info"
	labels: {
		"app.kubernetes.io/part-of":   "shop"
		"app.kubernetes.io/component": "api"
		"app.kubernetes.io/version":   "1.12.3"
	}
}

output: [_k8s, _svc]
```
```cue
// services/catalog/staging.cue
package app

App: {
	name:     "catalog-api"
	image:    "registry.example.com/catalog"
	version:  "1.12.3-rc1"
	replicas: 2
	env: LOG_LEVEL: "debug"
	labels: "app.kubernetes.io/environment": "staging"
}

output: [_k8s, _svc]
```
- CI pipeline
GitHub Actions snippet:
```yaml
name: Generate and Validate Manifests
on:
  pull_request:
    paths:
      - 'platform/**'
      - 'services/**'
jobs:
  build:
    runs-on: ubuntu-latest
    container: ghcr.io/yourorg/typed-generator:sha-abcdef
    steps:
      - uses: actions/checkout@v4
      - name: Generate catalog prod
        run: |
          cue export platform/app.cue services/catalog/prod.cue \
            -e output --out yaml > manifests/catalog/prod/all.yaml
      - name: Generate catalog staging
        run: |
          cue export platform/app.cue services/catalog/staging.cue \
            -e output --out yaml > manifests/catalog/staging/all.yaml
      - name: Policy check (conftest)
        run: conftest test manifests --policy platform/policies/conftest
      - name: kubeconform
        run: kubeconform -summary -strict -ignore-missing-schemas manifests
      - name: Commit synthesized manifests
        uses: stefanzweifel/git-auto-commit-action@v5
        with:
          commit_message: "chore: synth manifests"
```
- Admission policy backup
Install a ValidatingAdmissionPolicy to catch any gap if something slips through:
```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: nonroot-no-priv
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    - expression: >-
        object.spec.template.spec.containers.all(c,
          has(c.securityContext) && c.securityContext.runAsNonRoot == true)
      message: "All containers must set runAsNonRoot: true"
    - expression: >-
        object.spec.template.spec.containers.all(c,
          !has(c.securityContext) || !has(c.securityContext.allowPrivilegeEscalation)
          || c.securityContext.allowPrivilegeEscalation == false)
      message: "allowPrivilegeEscalation must be false"
```
- GitOps sync
  - Point Argo CD or Flux to manifests/catalog/prod and manifests/catalog/staging.
  - Enable app-of-apps or Kustomize in Argo CD only for grouping; keep content immutable.
Now, a single PR shows:
- The intent change in services/catalog/prod.cue
- The generated diff in manifests/catalog/prod/all.yaml
- Policy results (pass/fail) and schema validation
No hand-edited YAML, no surprises.
## Choosing Your Path: CUE vs KCL vs Dhall vs CDKs
There’s no single right answer. Consider:
- Team background and ergonomics
  - CUE/KCL: familiar to ops and developers who like declarative config with constraints; CUE's unification model is powerful for layering.
  - Dhall: rigorous FP shops that value totality and referential transparency.
  - CDK8s/Pulumi: teams fluent in TS/Python/Go wanting libraries, IDEs, and unit tests.
- Type checking and constraints
  - Dhall offers the strongest static guarantees; CUE/KCL offer practical schemas plus constraints; CDKs rely on runtime checks unless you add types and validators.
- Ecosystem and tooling
  - CUE: strong Kubernetes community, used in policy engines and templating.
  - KCL: growing tooling and policy features, focused on cloud-native.
  - Dhall: stable, excellent import hygiene.
  - CDK8s/Pulumi: rich language ecosystems, testing, packaging.
- GitOps integration
  - All can synthesize YAML deterministically. Ensure you lock dependencies and eliminate non-determinism.
Recommended default in 2025 for broad platform teams: Start with CUE or KCL for contracts and generation; use CDK8s selectively for advanced constructs if your platform engineers prefer general-purpose languages. Dhall is an excellent choice for teams already comfortable with typed FP.
## Testing, Versioning, and Supply Chain
To make typed config work at scale, treat it like software.
- Tests
  - Unit tests for modules: given inputs, expect outputs (golden files/snapshots).
  - Property tests: for random valid inputs, outputs validate against policies and schemas.
  - Policy tests: OPA/Kyverno unit tests for each rule with positive and negative cases.
- Versioning
  - Semantically version modules. Breaking changes require a major bump. Maintain upgrade notes and migration scripts.
  - Annotate generated manifests with module and policy versions.
- Supply chain
  - Build generators in hermetic containers; pin digests.
  - Use checksums for imports (Dhall has this built-in; emulate for CUE/KCL/CDKs).
  - Sign generated manifests or Git tags (Sigstore) and record provenance (SLSA attestations).
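To enforce the versioning annotations in CI, a conformance check can assert they are present on every generated manifest — a Python sketch with hypothetical annotation keys:

```python
# Hypothetical annotation keys your generator is expected to stamp onto every manifest.
REQUIRED = ("example.com/module-version", "example.com/source-commit")

def missing_provenance(manifest: dict) -> list[str]:
    # Return the required annotation keys that are absent from this manifest.
    annotations = manifest.get("metadata", {}).get("annotations", {})
    return [key for key in REQUIRED if key not in annotations]
```

Fail the pipeline if any rendered object comes back with a non-empty list; that keeps provenance from silently regressing as modules evolve.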
## Observability and Operations
- Metrics to track
  - Generation time, cache hit rates
  - Policy violation counts by rule
  - GitOps reconciliation duration and failure rate
  - Change failure rate and MTTR pre/post migration
- Logging and tracing
  - Log generator versions and inputs (without secrets). Emit a trace ID from CI to cluster annotations.
- Rollbacks
  - Rollbacks are Git revert or Argo CD app rollback. Because outputs are deterministic, rollbacks are reliable.
## Common Pitfalls and How to Avoid Them
- Over-abstraction: If developers must open the module to understand what happens, your contract is too magical. Keep contracts explicit and well-documented.
- Environmental snowflakes: Don’t create separate modules per environment. Use layering on a single contract with small overrides.
- Imperative CDK logic: Using loops and ifs is fine; causing side-effects (e.g., fetching secrets from live systems) during synthesis is not. Keep generation pure.
- Inconsistent ownership: Platform owns the modules and policies; app teams own inputs. Make that explicit in CODEOWNERS.
- Mixed truth sources: Consolidate values into typed inputs. Avoid hidden env vars or CI-provided defaults that bypass review.
## FAQ
- Can I keep Helm? Yes, as a transition. You can generate Helm values with CUE/KCL/Dhall or wrap Helm charts as modules. But aim to retire complex templates in favor of typed contracts.
- What about Kustomize? Use it sparingly for grouping or last-mile ops; avoid deep overlay stacks once you adopt typed generation.
- Is this slower than YAML? Generation adds milliseconds to seconds in CI. The payback is orders-of-magnitude fewer human hours and fewer rollbacks.
- How do secrets fit? Keep secrets in ExternalSecrets/SealedSecrets or your secret manager. The typed contract should reference secret names/keys, not inline secret data.
## Conclusion
YAML isn’t the enemy—untyped, copy–paste YAML is. Typed configuration with CUE, KCL, Dhall, and CDKs lets you encode intent, constraints, and composition directly, then generate clean, deterministic manifests for GitOps. Pair this with policy-as-code and a disciplined CI/GitOps pipeline, and you transform configuration from a source of risk into a force multiplier.
In 2025, the tools and patterns are mature. Start small: add validation, wrap a single service type, adopt generators in CI, and show the before/after diff in review time and incident metrics. Teams that make this move ship faster, sleep better, and stop arguing about indentation.
## References and Further Reading
- CUE language: https://cuelang.org/
- KCL language: https://kcl-lang.io/
- Dhall language: https://dhall-lang.org/
- CDK8s: https://cdk8s.io/
- Pulumi Kubernetes: https://www.pulumi.com/registry/packages/kubernetes/
- Open Policy Agent: https://www.openpolicyagent.org/
- Gatekeeper: https://github.com/open-policy-agent/gatekeeper
- Kyverno: https://kyverno.io/
- ValidatingAdmissionPolicy (CEL): https://kubernetes.io/docs/reference/access-authn-authz/validating-admission-policy/
- Argo CD: https://argo-cd.readthedocs.io/
- Flux: https://fluxcd.io/
- kubeconform: https://github.com/yannh/kubeconform
- conftest: https://www.conftest.dev/
- Crossplane compositions: https://www.crossplane.io/
- Carvel ytt: https://carvel.dev/ytt/
- SLSA provenance: https://slsa.dev/