Craft your Apps for Kubernetes
Use Service Discovery
Say you have a service mynode in the yourspace namespace and a service myapp in the myspace namespace. If myapp wants to access the mynode service, the URL is:
mynode.yourspace.svc.cluster.local:8000 # 8000 is the service port, not the node port.
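You can verify the lookup from inside the cluster; a quick sketch, assuming a pod named myapp-pod in myspace whose image includes curl:

$ kubectl exec -n myspace myapp-pod -- curl -s http://mynode.yourspace.svc.cluster.local:8000/
# From a pod in the same namespace as mynode, the short name mynode:8000 would be enough.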
Configure Liveness and Readiness Probes
kubectl scale --replicas=3 deployment xxx
StrategyType: RollingUpdate
template:
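A minimal sketch of what the probes look like inside the Deployment's pod template; the /health path, port 8080, and the timing values here are assumptions, not the workshop's exact manifest:

    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        ports:
        - containerPort: 8080
        # Liveness: restart the container if this check keeps failing
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5
        # Readiness: keep the pod out of the Service until this check passes
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5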
Once you understand the basics, you can try the advanced demonstration, where a stateful shopping cart is preserved across a rolling update by leveraging the readiness probe.
https://github.com/redhat-developer-demos/popular-movie-store
More information on liveness & readiness probes:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/
Deploy Blue/Green
Description of Blue/Green Deployment
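Start by creating a second Deployment for the new version alongside the old one; a sketch, assuming a manifest named kubefiles/mynode-deployment-new.yml whose pod template carries the app=mynodenew label used below:

$ kubectl create -f kubefiles/mynode-deployment-new.yml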
You now have the new pod as well as the old ones:
$ kubectl get pods
Your client/user still sees only the old one:
$ curl $(minikube ip):$(kubectl get service/mynode -o jsonpath="{.spec.ports[*].nodePort}")
Now update the single Service to point to the new pod and go GREEN
$ kubectl patch svc/mynode -p '{"spec":{"selector":{"app":"mynodenew"}}}'
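Re-running the same curl should now return responses from the mynodenew pod; the exact greeting depends on what the new image serves:

$ curl $(minikube ip):$(kubectl get service/mynode -o jsonpath="{.spec.ports[*].nodePort}")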
Note: Our deployment YAML did not have liveness & readiness probes. Things worked out OK here because we waited until long after mynodenew was up and running before flipping the Service selector.
Built-In Canary
There are at least two types of deployments that some folks consider “canary deployments” in Kubernetes. The first is simply the rolling update strategy combined with a health check (liveness probe): if the health check on the new pods fails, the rollout stalls and the old pods keep serving traffic, so the bad version never takes over.
Switch the focus back to myboot in the myspace namespace:
$ kubectl config set-context --current --namespace=myspace
Make sure myboot has 2 replicas
$ kubectl scale deployment/myboot --replicas=2
and let’s attempt to put some really bad code into production
Go into hello/springboot/MyRESTController.java and add a System.exit(1) into the /health logic
1 | "/health") (method = RequestMethod.GET, value = |
Obviously this sort of thing would never pass through your robust code reviews and automated QA, but let’s assume it does.
Build the code
$ mvn clean package
Build the docker image for v3
$ docker build -t 9stepsawesome/myboot:v3 .
Terminal 1: Start a poller
while true
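A minimal poller loop, assuming the myboot Service is exposed via a NodePort on minikube (the sleep interval is arbitrary):

while true
do
  # Hit the myboot Service through its NodePort and print each response
  curl $(minikube ip):$(kubectl get service/myboot -o jsonpath="{.spec.ports[*].nodePort}")
  sleep .3
done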
Terminal 2: Watch pods
$ kubectl get pods -w
Terminal 3: Watch events
$ kubectl get events --sort-by=.metadata.creationTimestamp
Terminal 4: roll out the v3 update
$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v3
and watch the fireworks
$ kubectl get pods -w
Look at your Events
$ kubectl get events -w
And yet your polling client stays with the old code & old pod:
Aloha from Spring Boot! 133 on myboot-859cbbfb98-4rvl8
If you watch a while, the CrashLoopBackOff will continue and the restart count will increment.
Now, go fix the MyRESTController and also change the greeting from Hello to Aloha
No more System.exit()
@RequestMapping(method = RequestMethod.GET, value = "/health")
And change the greeting response to something you recognize.
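A sketch of the corrected handler plus the new greeting; greetingCount and hostname are assumed fields of the controller, and the exact method bodies are illustrative:

@RequestMapping(method = RequestMethod.GET, value = "/health")
public String health() {
    // No more System.exit(): just report healthy again
    return "I am fine, thank you\n";
}

@RequestMapping(method = RequestMethod.GET, value = "/")
public String sayGreeting() {
    // Aloha instead of Hello, so the new version is obvious in the poller output
    return "Aloha from Spring Boot! " + greetingCount++ + " on " + hostname;
}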
Save
$ mvn clean package
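Then rebuild the v3 image so the crash-looping pods pick up the fix on their next restart; this assumes you are building against minikube’s Docker daemon (e.g. after eval $(minikube docker-env)):

$ docker build -t 9stepsawesome/myboot:v3 .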
and now just wait for the “control loop” to self-correct
Manual Canary with multiple Deployments
Go back to v1
$ kubectl set image deployment/myboot myboot=9stepsawesome/myboot:v1
Next, we will use a 2nd Deployment like we did with Blue/Green.
$ kubectl create -f kubefiles/myboot-deployment-canary.yml
And you can see a new pod being born
$ kubectl get pods
And this is the v3 one
$ kubectl get pods -l app=mybootcanary
Now we add a label to both the v1 and v3 Deployments’ pod templates, causing new pods to be born:
$ kubectl patch deployment/myboot -p '{"spec":{"template":{"metadata":{"labels":{"newstuff":"withCanary"}}}}}'
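The same label also needs to go on the canary Deployment’s pod template; a sketch, assuming that Deployment is named mybootcanary to match the app=mybootcanary label above:

$ kubectl patch deployment/mybootcanary -p '{"spec":{"template":{"metadata":{"labels":{"newstuff":"withCanary"}}}}}'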
Tweak the Service selector for this new label
$ kubectl patch service/myboot -p '{"spec":{"selector":{"newstuff":"withCanary","app": null}}}'
You should see approximately 30% canary responses mixed in with the previous deployment (1 canary pod alongside the 2 myboot pods):
Hello from Spring Boot! 23 on myboot-d6c8464-ncpn8
You can then manipulate the percentages via the replicas associated with each deployment
20% Aloha (Canary): with the canary still at 1 replica, scale myboot to 4 so the canary serves roughly 1 request in 5
$ kubectl scale deployment/myboot --replicas=4
The challenge with this model is that you have to have the right pod count to get the right mix. If you want a 1% canary, you need 99 of the non-canary pods.
Istio Cometh
The concept of the Canary rollout gets a lot smarter and more interesting with Istio. You also get the concept of dark launches, which allows you to push a change into the production environment and send traffic to the new pod(s), yet no responses are actually sent back to the end-user/client.
Store data with PersistentVolume and PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
apiVersion: apps/v1
apiVersion: v1
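A minimal sketch of the pattern these manifests follow: a hostPath PersistentVolume, a PersistentVolumeClaim bound to it, and a Deployment mounting the claim. The names, sizes, and paths here are assumptions, not the workshop’s exact files:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mydata        # directory on the minikube node
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        volumeMounts:
        - name: mydata
          mountPath: /var/data   # where the app reads/writes its files
      volumes:
      - name: mydata
        persistentVolumeClaim:
          claimName: my-pvc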