


selector:
  matchLabels:
    run: nginx
template:
  metadata:
    labels:
      run: nginx
  spec:
    containers:
    - image: nginx
      name: nginx
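For context, this fragment is the tail of a Deployment manifest (its head is on the previous page). A minimal complete version might look like this; the metadata name nginx is an assumption:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx          # hypothetical name; the actual head of the manifest is on the previous page
spec:
  replicas: 1
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx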

You can also create an instance template and a managed instance group based on it:

gcloud services enable compute.googleapis.com --project=${PROJECT}

gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
  --machine-type=custom-1-4096 \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
  --container-restart-policy=always \
  --preemptible \
  --region=${REGION} \
  --project=${PROJECT}

gcloud compute instance-groups managed create ${TEMPLATE} \
  --base-instance-name=${TEMPLATE} \
  --template=${TEMPLATE} \
  --size=${CLONES} \
  --region=${REGION} \
  --project=${PROJECT}
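These commands assume the shell variables are already set. A sketch with placeholder values (the values are not from the book):

export PROJECT=my-gcp-project   # your GCP project ID (placeholder)
export TEMPLATE=kuard-demo      # name for both the template and the instance group (placeholder)
export REGION=europe-north1     # target region (placeholder)
export CLONES=3                 # number of instances in the group (placeholder)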

High service availability

To ensure high availability, traffic must be redirected to a standby instance when the application crashes. It is also often important to distribute the load evenly, since a single instance of the application cannot handle all the traffic. For this, a cluster of replicas is created behind a balancer. This time, let's take a more complex image so that we can examine more of the nuances:

esschtolts@cloudshell:~/bitrix (essch)$ cat deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80

esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
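Both manifests are applied in the usual way before checking the result; a sketch, using the file names shown above:

kubectl apply -f deployment.yaml
kubectl apply -f loadbalancer.yaml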

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginxlamp-7fb6fdd47b-jttl8   2/2     Running   0          3m

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                       AGE
frontend     LoadBalancer   10.55.242.137   35.228.73.217   80:32701/TCP,8080:32568/TCP   4m
kubernetes   ClusterIP      10.55.240.1     <none>          443/TCP                       48m
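Once the EXTERNAL-IP has been issued, you can check that the balancer really forwards traffic to the application, for example (the IP is taken from the listing above):

curl http://35.228.73.217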

Now we can create identical copies of our deployment, for example, for Production and Develop environments, but balancing will not work as expected: the balancer finds PODs by label, and the PODs of both the production and the developer copies match this label. Nor would placing the clusters in different projects be an obstacle; although for many tasks this is a big plus, it is not in the case of separating a developers' cluster from production. Namespaces are used to delimit the scope. We use them without noticing: when we list PODs without specifying a namespace, we are shown only the default one, while the system PODs stay in their own system namespace (a sketch of such separation is given at the end of this section):

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace
NAME          STATUS   AGE
default       Active   5h
kube-public   Active   5h
kube-system   Active   5h

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=kube-system

NAME                                               READY   STATUS    RESTARTS   AGE
event-exporter-v0.2.3-85644fcdf-tdt7h              2/2     Running   0          5h
fluentd-gcp-scaler-697b966945-bkqrm                1/1     Running   0          5h
fluentd-gcp-v3.1.0-xgtw9                           2/2     Running   0          5h
heapster-v1.6.0-beta.1-5649d6ddc6-p549d            3/3     Running   0          5h
kube-dns-548976df6c-8lvp6                          4/4     Running   0          5h
kube-dns-548976df6c-mcctq                          4/4     Running   0          5h
kube-dns-autoscaler-67c97c87fb-zzl9w               1/1     Running   0          5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-0wdx   1/1     Running   0          5h
kube-proxy-gke-bitrix-default-pool-38fa77e9-wvrf   1/1     Running   0          5h
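As a sketch of the separation described above, each environment can get its own namespace so that the balancer of one no longer sees the PODs of the other. The namespace names develop and production here are assumptions, not from the book:

kubectl create namespace develop
kubectl create namespace production
kubectl apply -f deployment.yaml --namespace=develop
kubectl apply -f deployment.yaml --namespace=production
# each environment now lists only its own PODs
kubectl get pods --namespace=develop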