


l7-default-backend-5bc54cfb57-6qk4l 1/1 Running 0 5h

metrics-server-v0.2.1-fd596d746-g452c 2/2 Running 0 5h

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

nginxlamp-b5dcb7546-g8j5r 1/1 Running 0 4h

Let's create a scope:

esschtolts@cloudshell:~/bitrix (essch)$ cat namespace.yaml

apiVersion: v1

kind: Namespace

metadata:

  name: development

  labels:

    name: development

esschtolts@cloudshell:~ (essch)$ kubectl create -f namespace.yaml

namespace "development" created

esschtolts@cloudshell:~/bitrix (essch)$ kubectl get namespace --show-labels

NAME STATUS AGE LABELS

default Active 5h <none>

development Active 16m name=development

kube-public Active 5h <none>

kube-system Active 5h <none>
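As a side note, the same scope could also have been created imperatively, without a manifest file; a minimal equivalent of the namespace.yaml above, including its label, would be:

kubectl create namespace development

kubectl label namespace development name=development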

The essence of working with scopes is that we set a scope for a specific cluster context and can then run commands against it, and they will apply only to that scope. At the same time, the scope appears only as a key in commands such as kubectl get pods: the configuration files of controllers (Deployment, DaemonSet and others) and services (LoadBalancer, NodePort and others) do not mention it, which lets them be transferred seamlessly between scopes. This is especially relevant for the development pipeline: developer server, test server, and production server (a short sketch of this is given a little further below). Scopes are recorded in the cluster context file $HOME/.kube/config, which can be inspected with the kubectl config view command. For example, no scope appears in my cluster context entry (so the default scope, default, is assumed):

- context:

    cluster: gke_essch_europe-north1-a_bitrix

    user: gke_essch_europe-north1-a_bitrix

  name: gke_essch_europe-north1-a_bitrix

You can see something like this:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config view -o jsonpath='{.contexts[4]}'

{gke_essch_europe-north1-a_bitrix {gke_essch_europe-north1-a_bitrix gke_essch_europe-north1-a_bitrix []}}
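As mentioned above, because controller and service manifests do not normally pin a scope, the same file can be rolled out into different scopes simply by choosing the namespace at apply time. A minimal sketch, where deployment.yaml stands for any such manifest (the file name is illustrative, not one from this chapter):

# the same manifest, deployed once per scope

kubectl apply -f deployment.yaml --namespace=development

kubectl apply -f deployment.yaml --namespace=default

The short flag -n can be used instead of --namespace.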

Let's create a new context for this user and cluster:

esschtolts@cloudshell:~ (essch)$ kubectl config set-context dev \

> --namespace=development \

> --cluster=gke_essch_europe-north1-a_bitrix \

> --user=gke_essch_europe-north1-a_bitrix

Context "dev" modified.

As a result, the following was added:

- context:

    cluster: gke_essch_europe-north1-a_bitrix

    namespace: development

    user: gke_essch_europe-north1-a_bitrix

  name: dev
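To check that the new context really appeared next to the original one, the list of contexts can be printed:

kubectl config get-contexts

The output has CURRENT, NAME, CLUSTER, AUTHINFO and NAMESPACE columns; the dev line should show development in the NAMESPACE column, while the asterisk in the CURRENT column still marks the old context until we switch.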

Now it remains to switch to it:

esschtolts@cloudshell:~ (essch)$ kubectl config use-context dev

Switched to context "dev".

esschtolts@cloudshell:~ (essch)$ kubectl config current-context

dev

esschtolts@cloudshell:~ (essch)$ kubectl get pods

No resources found.

esschtolts@cloudshell:~ (essch)$ kubectl get pods --namespace=default

NAME READY STATUS RESTARTS AGE

nginxlamp-b5dcb7546-krkm2 1/1 Running 0 10h
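To convince yourself that the pod has not gone anywhere and only the scope being queried has changed, you can also list pods across all scopes at once, regardless of the current context:

# prints pods from every namespace and adds a NAMESPACE column

kubectl get pods --all-namespaces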

You could add a namespace to the existing context:

esschtolts@cloudshell:~/bitrix (essch)$ kubectl config set-context $(kubectl config current-context) --namespace=development

Context "gke_essch_europe-north1-a_bitrix" modified.

Now let's create a deployment in the development scope (it is now the default, so --namespace=development can be omitted); it will no longer be visible in the default scope, because default is no longer the default for our context and now has to be requested explicitly with --namespace=default: