Deploying Applications the DevOps Way
[TOC]
1. Using the Helm Package Manager
- Helm is used to streamline installing and managing Kubernetes applications.
- Helm consists of the `helm` tool, which needs to be installed, and a chart.
- A chart is a Helm package, which contains the following:
  - A description of the package
  - One or more templates containing Kubernetes manifest files
- Charts can be stored locally or accessed from remote Helm repositories.
Demo: Installing the Helm Binary
- Fetch the binary from https://github.com/helm/helm/releases; check for the latest release!

```
tar xvf helm-xxx.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
helm version
```
Getting Access to Helm Charts
The main site for finding Helm charts is https://artifacthub.io. This is the major way to find repository names: search for specific software and run the commands to install it. For instance, to run the Kubernetes Dashboard:

```
helm repo add stable https://kubernetes-charts.storage.googleapis.com
```
Demo: Managing Helm Repositories
```
helm repo add bitnami https://charts.bitnami.com/bitnami
```
2. Working with Helm Charts
Installing Helm Charts
- After adding repositories, use `helm repo update` to ensure access to the most up-to-date charts.
- Use `helm install` to install a chart with default parameters.
- After installation, use `helm list` to list currently installed charts.
- Use `helm delete` to remove a chart.
Demo: Installing a Helm Chart
```
# install mysql and generate a name for you
```
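Based on the comment above, the truncated demo can be sketched as follows. This is a non-authoritative sketch: it assumes the `bitnami` repository added earlier, and the release-name placeholder is illustrative.

```
helm repo update
helm install bitnami/mysql --generate-name
helm list                   # shows the generated release name
helm status <release-name>  # substitute the name shown by helm list
```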
Customizing Before Installing
- A Helm chart consists of templates to which specific values are applied.
- The values are specified in the `values.yaml` file, within the Helm chart.
- The easiest way to customize a Helm chart is by first using `helm pull` to fetch a local copy of the chart.
- Next, edit `chartname/values.yaml` to change any values.
Demo: Customizing a Helm Chart Before Installing
```
helm show values bitnami/nginx
```
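The values shown by `helm show values` can be overridden at install time. Below is a minimal sketch of an override file; the parameter names `replicaCount` and `service.type` are assumptions based on common chart conventions, not verified output of the command above.

```yaml
# custom-values.yaml -- hypothetical override file for bitnami/nginx
replicaCount: 3       # run three replicas instead of the chart default
service:
  type: ClusterIP     # do not allocate a LoadBalancer
```

Install with the overrides applied: `helm install my-nginx bitnami/nginx --values custom-values.yaml`.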
3. Using Kustomize
Understanding Kustomize
- `kustomize` is a Kubernetes feature that uses a file named `kustomization.yaml` to apply changes to a set of resources.
- This is convenient for applying changes to input files that the user does not control himself, and whose contents may change because of new versions appearing in Git.
- Use `kubectl apply -k ./` in the directory with the `kustomization.yaml` file to apply the changes.
- Use `kubectl delete -k ./` in the same directory to delete all that was created by the Kustomization.
Understanding a Sample Kustomization File
```
resources: # defines which resources (in YAML files) apply
```
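Only the first line of the sample file survived above; a minimal sketch of a complete `kustomization.yaml` follows (file names and values are illustrative assumptions):

```yaml
# kustomization.yaml -- illustrative sketch
resources:          # defines which resources (in YAML files) apply
  - deployment.yaml
  - service.yaml
namePrefix: test-   # prefix prepended to the name of every resource
namespace: testing  # namespace into which the resources are created
commonLabels:       # labels added to every resource
  environment: testing
```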
Using Kustomization Overlays
Kustomization can be used to define a base configuration, as well as multiple deployment scenarios (overlays), such as dev, staging, and prod. In such a configuration, the main `kustomization.yaml` defines the structure:
```
~/someApp
```
In each of the `overlays/{dev,staging,prod}/kustomization.yaml` files, users reference the base configuration in the resources field and specify changes for that specific environment:
```
resources:
```
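A minimal sketch of such an overlay file (the `dev` names and labels are illustrative assumptions):

```yaml
# overlays/dev/kustomization.yaml -- illustrative sketch
resources:
  - ../../base      # reference the shared base configuration
namePrefix: dev-    # distinguish dev resources by name
commonLabels:
  variant: dev      # label everything for the dev environment
```

Apply it with `kubectl apply -k overlays/dev/`.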
Demo: Using Kustomization
```
cat deployment.yaml
```

```
# deployment.yaml
```

```
# service.yaml
```

```
# kustomization.yaml
```
4. Implementing Blue-Green Deployments
Blue-green deployments are a way to deploy a new version of an application in a cluster while keeping the old version running; this accomplishes a zero-downtime application upgrade. Essential is the possibility to test the new version of the application before taking it into production. The blue Deployment is the current application version, and the green Deployment is the new version. Once the green Deployment is ready, the blue Deployment is deleted. Blue-green deployments can easily be implemented using Kubernetes Services.
Procedure Overview
- Start with the already running application.
- Create a new deployment for the new version of the application, and test with temporary Service resource.
- If all tests pass, remove the temporary Service resource.
- Remove the old Service resource (pointing to the blue Deployment), and immediately create a new Service resource pointing to the green Deployment.
- After successful transition, remove the blue Deployment.
- It is essential to keep the Service name unchanged, so that front-end resources such as Ingress will automatically pick up the transition.
Demo: Blue-Green Deployments
```
kubectl create deploy blue-nginx --image=nginx:1.14 --replicas=3
kubectl expose deploy blue-nginx --port=80 --target-port=80 --name=bgnginx
kubectl get deploy blue-nginx -o yaml > green-nginx.yaml
# clean up dynamically generated fields
# change the image version
# change "blue" to "green" throughout
kubectl create -f green-nginx.yaml
kubectl get pods
kubectl delete svc bgnginx; kubectl expose deploy green-nginx --port=80 --target-port=80 --name=bgnginx
kubectl delete deploy blue-nginx
```
5. Implementing Canary Deployments
A canary deployment is an update strategy where you first push the update at small scale to see whether it works well. In terms of Kubernetes, imagine a Deployment that runs 4 replicas. Next, you add a new Deployment that uses the same label. As the Service is load balancing, only 1 out of 5 requests would be serviced by the new version. If that doesn't seem to be working, you can easily delete it.
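The scenario above can be sketched in manifests (names, labels, and image tags are illustrative assumptions). Both Deployments carry the `app: myapp` label the Service selects on; with 4 stable replicas and 1 canary replica, roughly 1 in 5 requests hits the new version:

```yaml
# canary.yaml -- illustrative sketch
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 4
  selector:
    matchLabels:
      app: myapp
      track: stable
  template:
    metadata:
      labels:
        app: myapp
        track: stable
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
      track: canary
  template:
    metadata:
      labels:
        app: myapp
        track: canary
    spec:
      containers:
      - name: myapp
        image: myapp:2.0
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp      # matches both stable and canary Pods
  ports:
  - port: 80
```

Rolling back the canary is a single `kubectl delete deploy myapp-canary`.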
6. CRD: Custom Resource Definition
```
# crd-object.yaml
```
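The file above was truncated; a minimal sketch of such a CRD follows. The group, names, and schema fields are assumptions, chosen to match the `grep backup` commands below:

```yaml
# crd-object.yaml -- illustrative sketch
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
    shortNames:
      - bk
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                backupType:
                  type: string
                image:
                  type: string
```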
```
k apply -f crd-object.yaml
k api-versions | grep backup
k api-resources | grep backup
```
```
# crd-backup.yaml
```
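A minimal sketch of a custom resource that the definition in `crd-object.yaml` would accept (all field values are illustrative assumptions):

```yaml
# crd-backup.yaml -- illustrative sketch
apiVersion: stable.example.com/v1
kind: Backup
metadata:
  name: mybackup
spec:
  backupType: full
  image: my-backup-image:1.0
```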
7. Using Operators
- Operators are custom applications, based on Custom Resource Definitions.
- Operators can be seen as a way of packaging, running, and managing applications in Kubernetes.
- Operators are based on Controllers, which are Kubernetes components that continuously operate dynamic systems.
- The Controller loop is the essence of any Controller.
- The Kubernetes Controller manager runs a reconciliation loop, which continuously observes the current state, compares it to the desired state, and adjusts the current state when necessary.
- Operators are application-specific Controllers.
- Operators can be added to Kubernetes by developing them yourself.
- Operators are also available from community websites.
- A common registry for Operators is found at operatorhub.io (which is rather OpenShift-oriented).
- Many solutions from the Kubernetes ecosystem are provided as Operators:
  - Prometheus: a monitoring and alerting solution
  - Tigera: the operator that manages the Calico network plugin
  - Jaeger: a distributed tracing solution
Demo: Installing the Calico Network Plugin
```
minikube stop; minikube delete
```
8. Using StatefulSets
- The main purpose of StatefulSets is to provide a persistent identity to Pods, as well as Pod-specific storage.
- Each Pod in a StatefulSet has a persistent identifier that it keeps across rescheduling.
- StatefulSets provide ordering as well.
- Using a StatefulSet is valuable for applications that require any of the following:
  - Stable and unique network identifiers
  - Stable persistent storage
  - Ordered deployment and scaling
  - Ordered automated rolling updates
Understanding StatefulSets Limitations
- Storage provisioning based on a StorageClass must be available.
- To ensure data safety, volumes created by the StatefulSet are not deleted when the StatefulSet is deleted.
- A headless Service is required for StatefulSets.
- To guarantee removal of StatefulSet Pods, scale down the number of Pods to 0 before removing the StatefulSet.
Demo: Using a StatefulSet
```
# sfs.yaml
```
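The file above was truncated; a minimal sketch of a StatefulSet with the required headless Service and a volume claim template follows (names, image, and storage size are illustrative assumptions):

```yaml
# sfs.yaml -- illustrative sketch
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  clusterIP: None        # headless Service, required by the StatefulSet
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx     # must match the headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:  # one PVC per Pod, provisioned by the StorageClass
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
```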
```
k get storageclass
k apply -f sfs.yaml
k get all
```
StatefulSets don't use a ReplicaSet.