1. Why We Need a Distributed Architecture

  • To increase system capacity
  • To improve system availability

2. The Distributed Systems Technology Stack

The distributed technology stack serves those two goals, increasing system capacity and improving availability, so it mainly has to accomplish two things:

  • Handling large traffic: improve performance by using clustering to spread the load of large-scale concurrent requests across different machines.
  • Protecting key business flows: improve back-end service availability by isolating faults to stop the domino (avalanche) effect, and, when traffic is excessive, degrade non-critical features so that key business flows keep working.

2.1 Improving Performance

  • Caching systems


Class

Class definition

class className {

    /* All member variables
       and member functions */

}; // note: a class definition must end with a semicolon

Creating objects

class className {
    ...
};

int main() {
    int i;        // integer object
    className c;  // className object
}

Class access control

Private:

class Class1 {
    int num; // 1. members are private by default
    ...
};

class Class2 {
private: // private access can also be declared explicitly
    int num;
    ...
};

public & protected


1. Explore Observability

Open the dashboard:

#!/bin/bash

export MINIKUBE_IP=$(minikube --profile istio-mk ip)

kubectl patch service/grafana -p '{"spec":{"type":"NodePort"}}' -n istio-system
open http://$MINIKUBE_IP:$(kubectl get svc grafana -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

kubectl patch service/jaeger-query -p '{"spec":{"type":"NodePort"}}' -n istio-system
open http://$MINIKUBE_IP:$(kubectl get svc jaeger-query -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

kubectl patch service/prometheus -p '{"spec":{"type":"NodePort"}}' -n istio-system
open http://$MINIKUBE_IP:$(kubectl get svc prometheus -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')

kubectl patch service/kiali -p '{"spec":{"type":"NodePort"}}' -n istio-system
open http://$MINIKUBE_IP:$(kubectl get svc kiali -n istio-system -o 'jsonpath={.spec.ports[0].nodePort}')/kiali

Kiali: default username/password: admin/admin

1.1 Grafana

In Grafana, go to Home -> Istio -> Istio Workload Dashboard.


1.2 Jaeger



Understand Microservices architecture requirements and challenges

resource

  • API
  • Discovery
  • Invocation
  • Elasticity
  • Resilience
  • Pipeline
  • Authentication
  • Logging
  • Monitoring
  • Tracing

before Istio

after Istio

The sidecar intercepts all network traffic.

How to add an Istio proxy (sidecar)?

Inject it manually into a manifest:

istioctl kube-inject -f NormalDeployment.yaml

or enable automatic injection for a namespace (this applies to pods created after the label is set; existing pods must be restarted):

kubectl label namespace myspace istio-injection=enabled


Discover CustomResourceDefinitions

Custom Resources extend the API

Custom Controllers provide the functionality: they continually maintain the desired state, monitoring each resource and reconciling it to match its configuration.

https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/

https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/

Custom Resource Definitions (CRDs) have been available since Kubernetes 1.7.

$ kubectl get crds        # list the CRDs installed in the cluster
$ kubectl api-resources   # list all resource types, including custom ones

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: pizzas.mykubernetes.burrsutter.com
  labels:
    app: pizzamaker
    mylabel: stuff
spec:
  group: mykubernetes.burrsutter.com
  scope: Namespaced
  version: v1beta2
  names:
    kind: Pizza
    listKind: PizzaList
    plural: pizzas
    singular: pizza
    shortNames:
    - pz
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            toppings:
              type: array
            sauce:
              type: string

Add Pizzas to your Kubernetes Cluster

cheese-pizza.yaml
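A minimal Pizza resource matching the CRD above might look like this (a sketch; the toppings and sauce values are assumptions, not the actual cheese-pizza.yaml):

```yaml
apiVersion: mykubernetes.burrsutter.com/v1beta2
kind: Pizza
metadata:
  name: cheese-pizza
spec:
  toppings:
  - mozzarella
  sauce: regular
```

Applying it with kubectl apply -f cheese-pizza.yaml only works once the CRD has been registered with the cluster.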


Use Service Discovery

Say you have a service mynode in namespace yourspace and a service myapp in namespace myspace. If myapp wants to reach the mynode service, the URL is:

mynode.yourspace.svc.cluster.local:8000 # 8000 is the service port, not the node port.
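The name follows the fixed pattern service.namespace.svc.cluster.local. A quick sketch composing it from the values in the text above:

```shell
# Compose the in-cluster DNS name for a Service
SVC=mynode
NS=yourspace
PORT=8000   # the Service port, not the node port
URL="${SVC}.${NS}.svc.cluster.local:${PORT}"
echo "$URL"   # → mynode.yourspace.svc.cluster.local:8000
```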

Configure Liveness and Readiness Probes

kubectl scale --replicas=3 deployment xxx

As kubectl describe deployment shows, the default rollout strategy is:

StrategyType: RollingUpdate
RollingUpdateStrategy: 1 max unavailable, 1 max surge
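The same rollout strategy, expressed as the equivalent stanza in the Deployment manifest:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
```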
template:
  metadata:
    labels:
      app: myboot
  spec:
    containers:
    - name: myboot
      image: myboot:v1
      ports:
      - containerPort: 8080
      livenessProbe:
        httpGet:
          port: 8080
          path: /
        initialDelaySeconds: 10
        periodSeconds: 5
        timeoutSeconds: 2
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 3

Once you understand the basics, you can try the advanced demonstration, where a stateful shopping cart is preserved across a rolling update by leveraging the readiness probe.

https://github.com/redhat-developer-demos/popular-movie-store

More information on liveness & readiness probes:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/

Read more »

logs

kubectl get pods
kubectl logs podname -p   # logs from the previous (crashed) container instance
kubectl logs podname      # logs from the current container

exec

kubectl exec -it pod-name -- /bin/bash
# inspect the cgroup configuration
cd /sys/fs/cgroup/memory

Constrain CPU & Memory

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myboot
  name: myboot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myboot
  template:
    metadata:
      labels:
        app: myboot
    spec:
      containers:
      - name: myboot
        image: 9stepsawesome/myboot:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "300Mi"
            cpu: "250m" # 1/4 core
          limits:
            memory: "400Mi"
            cpu: "1000m" # 1 core
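CPU quantities in the resources block use the m (millicore) suffix: 1000m is one core. A quick conversion sketch:

```shell
# "250m" in a resource request means 250 millicores, i.e. 0.25 of a CPU core
request="250m"
millicores=${request%m}                                  # strip the "m" suffix -> 250
cores=$(awk "BEGIN { printf \"%g\", $millicores / 1000 }")
echo "$cores cores"   # → 0.25 cores
```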

ConfigMap

First, let’s look at the current environment.

Change the environment on deployment:

kubectl set env deployment/myboot GREETING="hi"
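Rather than setting the variable imperatively, the same GREETING could come from a ConfigMap (a sketch; the ConfigMap name is an assumption):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myboot-config
data:
  GREETING: "hi"
```

The Deployment can then pull the values in via envFrom with a configMapRef pointing at myboot-config in the container spec.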

End to End

  1. Find a base image: docker.io, quay.io, gcr.io, registry.redhat.io

  2. Craft your Dockerfile

  3. Configure docker: eval $(minikube --profile myprofile docker-env)

  4. Build your image: docker build -t liulx/myimage:v1 .

    a. Test via:

    - `docker run -it -p 8080:8080 --name myboot liulx/myimage:v1`
    - `docker run -d -p 8080:8080 --name myboot liulx/myimage:v1`
    - `curl $(minikube --profile myprofile ip):8080`

    b. If remote repo, do docker tag and docker push

    c. docker stop containername to stop testing

  5. kubectl apply -f myDeployment.yaml

  6. kubectl apply -f myService.yaml

  7. Expose a URL via your kubernetes distribution’s load-balancer

docker build

docker build -t something/animagename:tag .

The `.` indicates where to find the Dockerfile and the assets to be included in the build process.

You can also explicitly identify the Dockerfile:

  • docker build -t somestring/animagename:tag -f somedirectory/Dockerfile_Production .
  • docker build -t somestring/animagename:tag -f somedirectory/Dockerfile_Testing .
  • docker build -f src/main/docker/Dockerfile.native -t mystuff/myimage:v1 .
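For reference, a minimal Dockerfile for a jar-based app such as myboot might look like this (a sketch; the base image and jar path are assumptions):

```dockerfile
FROM openjdk:11-jre-slim
COPY target/myboot.jar /app/myboot.jar
EXPOSE 8080
CMD ["java", "-jar", "/app/myboot.jar"]
```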

Building Images

Options Include:

  1. docker build then kubectl run or kubectl create -f deploy.yml
  2. Jib - Maven/Gradle plugin by google
  3. Shift maven plugin by Red Hat
  4. s2i - source to image
  5. Tekton - pipeline-based image building
  6. No docker: Red Hat’s podman, Google’s kaniko, Uber’s makisu
  7. Buildpacks - similar to Heroku & Cloud Foundry

Setup

OpenShift is Red Hat’s distribution of Kubernetes

minikube and minishift are essentially equivalent and will be used for the demonstrations/examples below.

Prerequisites

  • Docker or
  • Podman
  • brew install kubectx
  • minikube
  • kubectl

Downloads

Download & Install the kubectl CLI

# macOS
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/darwin/amd64/kubectl
$ chmod +x kubectl
$ sudo mv kubectl /usr/local/bin/   # put it on your PATH
# or
$ brew install kubernetes-cli

Linux & Windows instructions for finding and downloading kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl

Download & Install Minikube Cluster
