Managing Kubernetes at scale means writing and maintaining hundreds of YAML manifests. As clusters grow, so does the complexity: environment-specific configuration, version tracking, and rollback requirements pile up fast. Helm, the package manager for Kubernetes, solves these problems by templating manifests, packaging them into reusable charts, and tracking every deployment as a versioned release. For IT operations teams responsible for infrastructure reliability, Helm is a critical tool that brings consistency and repeatability to Kubernetes workflows.
What is Helm and why does it matter for Kubernetes?
Helm is an open-source project maintained by the Cloud Native Computing Foundation (CNCF) that acts as a package manager for Kubernetes. It lets you define, install, and upgrade Kubernetes applications using a packaging format called charts.
A Helm chart bundles all the Kubernetes resources your application needs (Deployments, Services, ConfigMaps, Secrets, and more) into a single, versioned package. Instead of applying dozens of individual YAML files with `kubectl apply`, you run one `helm install` command and Helm handles the rest.
Helm v3, released in late 2019, removed the server-side Tiller component that earlier versions required. This was a major architectural change. Helm v3 communicates directly with the Kubernetes API using your existing kubeconfig credentials, which simplifies setup and eliminates the security concerns that came with running Tiller as a cluster-wide privileged pod. If you've seen older tutorials that reference `helm init` or Tiller, those steps no longer apply.
For IT ops teams managing production clusters, Helm provides three core benefits:
Reproducible deployments. The same chart produces the same resources every time, regardless of who runs the install.
Version-controlled releases. Helm tracks every deployment as a release with a revision history, making rollbacks straightforward.
Environment-specific configuration. Values files let you customize deployments for dev, staging, and production without changing the templates themselves.
How does Helm differ from raw Kubernetes manifests?
When you're managing a small number of services, writing raw Kubernetes YAML works fine. Once you scale beyond a handful of microservices or need to deploy across multiple environments, the limitations become clear.
Here is how the two approaches compare:
| Capability | Raw Kubernetes YAML | Helm |
|---|---|---|
| Environment management | Duplicate YAML files per environment or use manual find-and-replace | Override values with `-f staging-values.yaml` or `--set` flags |
| Rollback | Manual process: track previous manifests, reapply them with `kubectl apply` | Built-in: `helm rollback` restores any previous revision |
| Templating | No native support; requires external tools like Kustomize or envsubst | Go templates built in; conditionals, loops, and helper functions included |
| Configuration reuse | Copy-paste between projects; drift between copies over time | Charts are self-contained packages; share via repositories |
| Repository sharing | No standard distribution mechanism | Helm repositories and OCI registries provide versioned chart distribution |
The tradeoff is complexity. Helm introduces its own templating syntax and packaging conventions, which means a learning curve. For teams managing more than a few services, that investment pays off quickly.
What is a Helm chart and how is it structured?
A Helm chart is a directory with a specific file structure that defines a Kubernetes application. At minimum, a chart contains:
```
mychart/
  Chart.yaml          # Chart metadata: name, version, description
  values.yaml         # Default configuration values
  templates/          # Kubernetes manifest templates
    deployment.yaml
    service.yaml
    _helpers.tpl      # Reusable template partials
    NOTES.txt         # Post-install usage instructions
  charts/             # Dependency charts (subcharts)
```
Chart.yaml declares the chart's identity. It includes the chart name, version (the chart package version), and appVersion (the version of the application being deployed). Helm uses semantic versioning for charts, so bumping the version signals changes to anyone consuming the chart.
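A minimal `Chart.yaml` for a chart like the one above might look like this (the version numbers are illustrative):

```yaml
apiVersion: v2            # Helm v3 chart API version
name: mychart
description: A web application packaged for Kubernetes
type: application
version: 0.1.0            # Chart package version; bump on any chart change
appVersion: "2.1.0"       # Version of the application being deployed
```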
values.yaml holds the default configuration. Every value defined here can be overridden at install time. This is where you set defaults for replica counts, image tags, resource limits, service types, and any other configurable parameter.
templates/ contains Go-templated Kubernetes manifests. Helm renders these templates by merging them with values at install time. A simple example:
```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-app
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}   # Must match the selector above
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          resources:
            limits:
              memory: {{ .Values.resources.limits.memory }}
              cpu: {{ .Values.resources.limits.cpu }}
```
The double-brace syntax (`{{ }}`) pulls values from either the release context (`.Release.Name`) or the values file (`.Values.replicaCount`). This is what makes a single chart work across multiple environments.
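The `_helpers.tpl` file shown in the chart layout is where reusable template partials live. A small sketch defining a shared label block and pulling it into a manifest (the partial name is illustrative):

```yaml
{{/* templates/_helpers.tpl: define a named, reusable partial */}}
{{- define "mychart.labels" -}}
app: {{ .Release.Name }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
{{- end }}

{{/* In any template, include the partial with correct indentation: */}}
metadata:
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
```

Defining labels once in a partial keeps them consistent across every resource the chart creates.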
How do you install and configure Helm?
Installing Helm v3 is straightforward. There is no server-side component to set up.
On macOS with Homebrew:
```shell
brew install helm
```
On Linux or CI systems:
```shell
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```
On Windows with Chocolatey:
```shell
choco install kubernetes-helm
```
After installation, verify it's working:
```shell
helm version
```
Helm uses your existing kubeconfig to communicate with your cluster. If `kubectl` can reach your cluster, Helm can too. No additional configuration or initialization is required.
To start using community charts, add a repository:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```
You can search for available charts with:
```shell
helm search repo bitnami
```
How do you create your first Helm chart?
Helm provides a scaffolding command that generates a chart with all the standard files and a working example:
```shell
helm create mychart
```
This creates the directory structure described earlier, pre-populated with templates for a basic nginx deployment. It's a useful starting point for learning the template syntax.
Before deploying, validate your chart:
```shell
# Check for syntax errors and best practices
helm lint mychart

# Render templates locally without deploying
helm template mychart
```
The `helm template` command is particularly valuable. It outputs the final rendered YAML that Helm would send to the cluster, letting you inspect exactly what gets created before anything touches your environment.
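One practical pattern is to render the chart twice and diff the output, so you can see exactly what an override file changes before upgrading. A sketch, assuming the `values-production.yaml` override file discussed later in this guide:

```shell
# Render with defaults, then with production overrides, and compare
helm template my-release mychart > /tmp/defaults.yaml
helm template my-release mychart -f values-production.yaml > /tmp/production.yaml
diff -u /tmp/defaults.yaml /tmp/production.yaml
```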
When you're ready to deploy:
```shell
helm install my-release mychart
```
Here, `my-release` is the release name (Helm's identifier for this particular deployment) and `mychart` is the chart directory. Helm creates all the Kubernetes resources defined in the templates and begins tracking the release.
To deploy from a repository instead of a local directory:
```shell
helm install my-nginx bitnami/nginx
```
How do you manage values across environments?
Helm's values system is one of its most practical features. You define defaults in `values.yaml` and override them per environment using additional values files or command-line flags.
A typical setup looks like this:
```
mychart/
  values.yaml             # Defaults (development)
  values-staging.yaml     # Staging overrides
  values-production.yaml  # Production overrides
```
Your default `values.yaml` might contain:
```yaml
replicaCount: 1
image:
  repository: myapp
  tag: latest
resources:
  limits:
    memory: 256Mi
    cpu: 250m
```
Your `values-production.yaml` overrides only what changes:
```yaml
replicaCount: 3
image:
  tag: v2.1.0
resources:
  limits:
    memory: 1Gi
    cpu: "1"
```
Deploy to production by specifying the override file:
```shell
helm install my-release mychart -f values-production.yaml
```
You can also set individual values from the command line:
```shell
helm install my-release mychart --set image.tag=v2.1.1
```
The precedence order is: default `values.yaml` is overridden by `-f` files, which are overridden by `--set` flags. This layered approach keeps your base chart clean while allowing per-environment customization.
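To make the layering concrete, here is a sketch that combines both override mechanisms with the example values files above:

```shell
# values.yaml sets replicaCount: 1 and image.tag: latest (defaults)
# values-production.yaml sets replicaCount: 3 and image.tag: v2.1.0
helm install my-release mychart \
  -f values-production.yaml \
  --set image.tag=v2.1.1
# Effective config: replicaCount=3 (from the -f file),
# image.tag=v2.1.1 (--set wins over the file)
```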
For IT operations teams managing deployments across dev, staging, and production clusters, this pattern eliminates the need for duplicated manifests. Your chart stays the same; only the values change.
What are Helm releases and how do you manage them?
Every time you run `helm install`, Helm creates a release. A release is a specific instance of a chart running in a cluster, and Helm tracks its full revision history.
List all active releases:
```shell
helm list
```
View the history of a specific release:
```shell
helm history my-release
```
This shows every revision, including the chart version, values used, and status. When a deployment goes wrong, roll back to a previous revision:
```shell
helm rollback my-release 2
```
This restores the release to revision 2. Helm applies the exact same configuration and templates that were used for that revision.
To update a running release with new values or a new chart version:
```shell
helm upgrade my-release mychart -f values-production.yaml
```
If you want to combine install and upgrade into a single command (useful in CI/CD pipelines):
```shell
helm upgrade --install my-release mychart -f values-production.yaml
```
This installs the release if it doesn't exist, or upgrades it if it does.
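In a pipeline, this is often combined with flags that make the deployment self-healing. A sketch (the namespace and 5-minute timeout are illustrative):

```shell
# Idempotent deploy: creates or upgrades the release, rolls back on failure
helm upgrade --install my-release mychart \
  -f values-production.yaml \
  --namespace production --create-namespace \
  --atomic --timeout 5m
```

The `--atomic` flag implies `--wait`, so Helm blocks until the resources are ready and reverts the release automatically if they are not ready within the timeout.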
To remove a release entirely:
```shell
helm uninstall my-release
```
In Helm v3, `uninstall` removes the release and all associated Kubernetes resources. There is no `--purge` flag because purging is the default behavior.
What security practices should you follow with Helm?
Helm charts define infrastructure, so treating them with the same rigor you'd apply to any infrastructure-as-code is essential. Teams responsible for keeping endpoints and infrastructure secure should pay close attention to how charts are sourced, stored, and deployed. Automox helps IT ops teams maintain endpoint security across their fleet, and the same discipline applies to Kubernetes infrastructure managed through Helm.
Verify chart provenance. Helm supports chart signing and verification using GPG keys. When consuming third-party charts, verify signatures to confirm the chart hasn't been tampered with:
```shell
helm verify mychart-0.1.0.tgz
helm install --verify my-release mychart
```
Use OCI registries for chart storage. Helm v3 supports storing charts in OCI-compliant registries (like Docker Hub, Amazon ECR, or GitHub Container Registry). OCI registries provide access controls, audit logging, and versioning that traditional Helm repositories lack:
```shell
helm push mychart-0.1.0.tgz oci://registry.example.com/charts
helm install my-release oci://registry.example.com/charts/mychart --version 0.1.0
```
Scope RBAC appropriately. Since Helm v3 uses your kubeconfig credentials directly, the permissions of whoever runs `helm install` determine what Helm can create. Follow the principle of least privilege: CI/CD service accounts should only have access to the namespaces they deploy to.
Manage secrets carefully. Avoid hardcoding sensitive values in `values.yaml` files or storing them in version control. Use external secrets management tools like HashiCorp Vault, the Kubernetes Secrets Store CSI driver, or sealed-secrets to inject sensitive configuration at deploy time.
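One common pattern is to have the chart reference a Secret that already exists in the cluster instead of templating the secret data itself. A sketch, where `existingSecret` is a hypothetical values key:

```yaml
# templates/deployment.yaml (container env fragment, illustrative)
env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        # Secret created out-of-band (e.g. by Vault or the External Secrets
        # Operator), so no credential ever appears in values.yaml or Git
        name: {{ .Values.existingSecret }}
        key: password
```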
Pin chart versions. Always specify exact chart versions in production deployments rather than pulling the latest. This prevents unexpected changes from upstream chart updates:
```shell
helm install my-release bitnami/nginx --version 15.3.1
```
Lint and template before deploying. Run `helm lint` and `helm template` in your CI pipeline to catch errors before they reach a cluster. Combine these with policy tools like OPA Gatekeeper or Kyverno to enforce organizational standards on rendered manifests.
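A minimal CI gate might chain these checks; the policy step is a sketch that assumes the Kyverno CLI is installed and a `policies/` directory exists:

```shell
set -euo pipefail

# Fail fast on chart errors and best-practice violations
helm lint mychart -f values-production.yaml

# Render the final manifests, then enforce organizational policy on them
helm template my-release mychart -f values-production.yaml > rendered.yaml
kyverno apply policies/ --resource rendered.yaml
```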
Sources
Helm Documentation - Official Helm v3 docs, including installation, chart development, and best practices.
Kubernetes Documentation - Kubernetes concepts and reference for resources managed by Helm charts.
CNCF Helm Project - Helm's graduated project page on the Cloud Native Computing Foundation site.
Helm Chart Best Practices Guide - Official conventions for chart structure, values, templates, and dependencies.
Artifact Hub - CNCF's hub for finding and publishing Helm charts and other cloud-native packages.
Helm Security Considerations - Documentation on chart integrity and provenance verification.
Frequently asked questions
Is Helm the only option for managing Kubernetes manifests?
No. Kustomize, which is built into `kubectl`, offers a template-free approach using overlays and patches. Other tools like Pulumi and CDK for Kubernetes (cdk8s) let you define infrastructure using general-purpose programming languages. Helm stands out for its packaging model, release management, and large ecosystem of community charts.
Is Helm v2 still supported?
No. Helm v2 reached end of life in November 2020 and no longer receives security updates. If you're still running Helm v2, migration is strongly recommended. The Helm project provides a 2to3 plugin that converts Helm v2 releases to the v3 format in place.
Can a single Helm release deploy across multiple namespaces?
A single Helm release targets one namespace by default (specified with `--namespace`). If your application requires resources in multiple namespaces, you can hardcode the namespace in individual templates, though this complicates release management. A cleaner approach is to use separate releases or an umbrella chart with subcharts scoped to different namespaces.
What happens when a Helm deployment fails?
When a `helm install` or `helm upgrade` fails (for example, a pod fails its readiness probe), Helm marks the release revision as "failed" in its history. You can then run `helm rollback` to revert to the last successful revision. Setting the `--atomic` flag on install or upgrade tells Helm to automatically roll back if the deployment doesn't succeed within the specified timeout.
Where should you store Helm values files?
Store values files in version control alongside your chart or in a dedicated deployment repository. Sensitive values (database passwords, API keys, TLS certificates) should not go in values files. Use a secrets management solution to inject those at deploy time. Many teams store non-sensitive values in Git and reference external secrets through tools like the External Secrets Operator.
