How large public and private organisations are using teams, Kubernetes, Vault and Git to manage their deployments
This document sets out my observations from recently working in a large public sector environment in the UK that manages significant workloads. It also serves as a useful comparison between the deployment practices of yesterday and the more modern practices seen today.
I think a significant divide is opening up between organisations which are adopting these more modern approaches to software development and deployment, and traditional software houses which are leaving themselves behind with old deployment practices.
"Cattle not snowflakes"
This is the idea that we think of services, not servers. Losing a server should be seen as a non-issue; workloads should continue to function. You might be thinking, well, how often does a VM or server actually fail? But what you're missing is the cost and time involved in recreating that server, and the speed with which you can replicate the service or deploy a new version.
Practically, what this can mean is having a Kubernetes cluster upon which services (also known as workloads) are dynamically assigned to machines which have the capacity to run them. You no longer care which server your service is running on, only that it is running. This also means having everything fronted by a load balancer which ultimately points to your Kubernetes cluster. Requests are routed to active nodes running your workloads (see ingresses). You care about the service, not the server.
The end result of this is that everything needs to be described as code, including infrastructure (subnets, database instances, S3 buckets, access control lists and so on). In large environments even the creation of the Kubernetes cluster itself is described in code, for example using the open source project 'kops'.
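For instance, standing up a cluster with kops looks roughly like this. This is only a minimal sketch; the cluster name, state-store bucket and zones are hypothetical placeholders.

# kops keeps its own cluster definition in an S3 "state store"
export KOPS_STATE_STORE=s3://example-kops-state-store

# Create the cluster definition and apply it (--yes actually builds the cloud resources)
kops create cluster \
  --name=cluster.example.com \
  --zones=eu-west-2a,eu-west-2b \
  --node-count=3 \
  --yes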
At organisation x there was a small team upstairs who monitored service status. This largely involved observing the outcome of Jenkins builds into Kubernetes namespaces. The focus was on investigating deployment rollout failures before they reached the production namespace and, if there was an outage in production, inspecting the Kubernetes cluster logs (e.g. pod logs).
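The sort of commands involved in that kind of inspection look roughly like this; a minimal sketch, with hypothetical deployment and namespace names.

# Check whether a rollout into a namespace has completed or is stuck
kubectl rollout status deployment/abc-application --namespace staging

# During a production outage: what is running, and what are the pods logging?
kubectl get pods --namespace production
kubectl logs deployment/abc-application --namespace production --tail=100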
This is in stark contrast to how things used to be done. We used to deploy to individual servers. We would spin up a VM, get an application onto that VM somehow and then update DNS accordingly. Maybe we would put a load balancer in front of it, but the concept of being able to delete the infrastructure and recreate it from scratch was foreign.
Today, requests are routed by hostname and path: a load balancer matches the hostname and path to a service which could be running anywhere within the cluster.
Technologies which help with this include Kubernetes (for describing deployments) and Terraform (for describing infrastructure).
Everything ultimately binds to a port and sends data over it; we think less about the internals of individual applications and instead just see them as services.
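In Kubernetes terms this hostname-and-path routing is expressed as an ingress. Here is a minimal sketch; the hostname, service name and port are hypothetical placeholders.

# Route abc.example.com/ to the abc-application service in the staging namespace
kubectl apply --namespace staging -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: abc-application
spec:
  rules:
  - host: abc.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: abc-application
            port:
              number: 8080
EOF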
You can delete 'everything' and recreate it later (infrastructure as code)
Except your state.
The point here is to know exactly how, and with what configuration, the complete infrastructure is running, and to be able to delete it and bring it back up.
Every service, every piece of infrastructure, every configuration, every environment config and every bootstrapped node is described in configuration written to a git repository. This also means that over Christmas, when not many people are working, you can delete services which are not being used and re-create them when people return to work, at a great cost saving.
A practical example of how this was being used is developer environments. Developers would be working on applications which required access to databases and other services with mocked data. The first thing they would do in the morning was spin up these environments via a Jenkins job. This resulted in endpoints they could point to (e.g. a database) running on the cluster in a development namespace, and development teams would share these development namespaces. At the end of the day, someone was responsible for destroying these environments (also via Jenkins jobs). This saved significant amounts of money. All of this was initiated via Jenkins jobs, which had access to the Kubernetes cluster and the various namespaces (explained later).
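To make that concrete, here is a minimal sketch of what the morning "spin up" and end-of-day "tear down" jobs might run, assuming a kustomize layout; the namespace and paths are hypothetical placeholders.

# Morning: create (or re-create) the shared development namespace and deploy into it
kubectl create namespace dev-team-a --dry-run=client -o yaml | kubectl apply -f -
kustomize build environments/dev | kubectl apply --namespace dev-team-a -f -

# End of day: destroy the environment by deleting everything in the namespace
kubectl delete namespace dev-team-a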
If everything's in code, where do I store my secrets?
A big problem here is secrets. Where do we put them when everything is written to git repos? These are managed using a service called Vault, by HashiCorp. Rather than hardcoding sensitive information inside configuration files, something called a "vault path" is written in these files, which references the location of a secret on a Vault cluster.
There are associated vault policies which dictate which services are allowed to access which secrets, via the use of vault tokens, which expire and need continual renewal. Jenkins comes to the rescue again here, with scheduled jobs to renew vault tokens, which are then injected into running containers.
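From the command line the same ideas look roughly like this; a minimal sketch, with hypothetical vault paths.

# Store a secret at a vault path, then read it back by that path
vault kv put secret/abc-application/db password='s3cr3t'
vault kv get -field=password secret/abc-application/db

# Tokens expire, so something (here, a scheduled Jenkins job) must renew them
vault token renew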
In organisation x there was a support team whose role included updating and maintaining vault policies, again via a git repo. Via tickets, teams requested the ability to read or write to certain vault paths.
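A vault policy granting a team read access to a path might look something like this; a minimal sketch, with a hypothetical policy name and path.

# Policy file kept in the git repo the support team maintained
cat > abc-application-read.hcl <<'EOF'
path "secret/data/abc-application/*" {
  capabilities = ["read"]
}
EOF

# Apply (or update) the policy on the Vault cluster
vault policy write abc-application-read abc-application-read.hcl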
With infrastructure as code using Terraform, in theory you can delete all of your infrastructure and re-create it. This hinges on the ability to back up Terraform's record of what it has created, something called the Terraform state file. If you lose this there is no going back. Typically it is stored in (and backed up to) an S3 bucket.
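Keeping the state file in S3 is a one-off backend configuration; a minimal sketch, with a hypothetical bucket and key.

# backend.tf tells Terraform to store its state file remotely in S3
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "example-terraform-state"
    key    = "platform/terraform.tfstate"
    region = "eu-west-2"
  }
}
EOF

# Initialise the backend; from now on the state lives in the bucket, not on a laptop
terraform init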
Every namespace is a repo, and every environment has a different config
The big picture: there is a Jenkins instance which has access to a Kubernetes cluster. Everything is controlled by version control in git repositories. No Jenkins jobs are written manually using the web interface; they are all fetched from a git repo. The Kubernetes cluster has multiple namespaces, and the Jenkins jobs have build parameters which specify which namespace a job should deploy into.
Seed jobs. The only hardcoded job in Jenkins is a job to fetch all of the jobs from a git repo for a Kubernetes cluster. There is one git repo per Kubernetes cluster, and inside this repo are all the seed jobs for this cluster. The individual seed jobs specify their Kubernetes namespace via build parameters. E.g.
job('abc-application') {
  parameters {
    // Build parameter selecting the target Kubernetes namespace
    stringParam('NAMESPACE', 'staging', 'Kubernetes namespace to deploy to')
  }
  scm {
    git('git://github.com/chrisjsimpson/abc-application.git')
  }
  triggers {
    // Poll the repository roughly every 15 minutes
    scm('H/15 * * * *')
  }
  steps {
    maven('-e clean test')
    // Render the manifests and apply them into the chosen namespace
    shell('kustomize build ~/someApp | kubectl apply --namespace $NAMESPACE -f -')
  }
}
For example, there might be 500 jobs, but these are all stored in a git repository. There is one job in Jenkins (e.g. "update jobs") which checks out this jobs repo, and after it completes Jenkins is populated with all of the jobs described in the repo. For completeness, even the single seed job is not written manually; instead it is embedded as an XML document (Jenkins jobs are described in XML under the hood), and during creation of the Jenkins instance an Ansible script installs Jenkins with this initial job.
Different stages of a deployment are deployed into different Kubernetes namespaces. In an ideal world these are named sensibly, such as testing, staging, performance testing, audit and production.
The same application progresses through these stages; the only thing that changes is the environment config. These environment configs were all stored in their own git repos.
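With a layout like kustomize overlays, promotion through the stages is just the same manifests applied with a different environment config; a minimal sketch, with hypothetical overlay paths.

# Same application, different namespace and environment config at each stage
kustomize build overlays/testing    | kubectl apply --namespace testing    -f -
kustomize build overlays/staging    | kubectl apply --namespace staging    -f -
kustomize build overlays/production | kubectl apply --namespace production -f -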
Consul-template & getting init containers to do the grunt work
Running containers need access to secrets, and somehow these secrets need to be injected into their environment. Init containers are a perfect fit for this: they run before the app containers in a pod start.
A practical example of this is TLS certificates. You can't simply bake certificates into your container images. Instead, these TLS certificates can be stored in Vault and fetched into the container during initialisation. They are written into a volume mount which the containers in the pod share.
The same is true for other secrets, such as passwords. A tool such as consul-template is used to populate files with secrets fetched from Vault.
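What an init container actually runs can be as simple as this; a minimal sketch, where the template, vault path and output location are hypothetical placeholders.

# Template referencing a vault path rather than a hardcoded secret
cat > /templates/db.conf.tpl <<'EOF'
{{ with secret "secret/data/abc-application/db" }}
password = "{{ .Data.data.password }}"
{{ end }}
EOF

# Render the template once into a volume shared with the app container, then exit
consul-template \
  -vault-addr="https://vault.example.com:8200" \
  -template="/templates/db.conf.tpl:/shared/db.conf" \
  -once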