go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0"} 0.000009186
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.25"} 0.000012056
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.5"} 0.000023256
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="0.75"} 0.000068848
go_gc_duration_seconds{instance="localhost:9090",job="prometheus",quantile="1"} 0.00021869
By going to the Graph tab, we can see their graphical representation.
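The same values can also be pulled over Prometheus' HTTP query API, for example (a minimal sketch, assuming Prometheus is listening on its default port 9090, as in the labels above):

essh@kubernetes-master:~$ curl 'http://localhost:9090/api/v1/query?query=go_gc_duration_seconds' 2> /dev/null | head -c 200

The response is JSON containing the same label sets and values that the console shows.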
Now Prometheus collects metrics only from the current node: go_*, net_*, process_*, prometheus_*, promhttp_*, scrape_* and up. To collect metrics from Docker as well, we tell the Docker daemon to expose its metrics in Prometheus format on port 9323:
essh@kubernetes-master:~$ curl http://localhost:9323/metrics 2> /dev/null | head -n 20
# HELP builder_builds_failed_total Number of failed image builds
# TYPE builder_builds_failed_total counter
builder_builds_failed_total{reason="build_canceled"} 0
builder_builds_failed_total{reason="build_target_not_reachable_error"} 0
builder_builds_failed_total{reason="command_not_supported_error"} 0
builder_builds_failed_total{reason="dockerfile_empty_error"} 0
builder_builds_failed_total{reason="dockerfile_syntax_error"} 0
builder_builds_failed_total{reason="error_processing_commands_error"} 0
builder_builds_failed_total{reason="missing_onbuild_arguments_error"} 0
builder_builds_failed_total{reason="unknown_instruction_error"} 0
# HELP builder_builds_triggered_total Number of triggered image builds
# TYPE builder_builds_triggered_total counter
builder_builds_triggered_total 0
# HELP engine_daemon_container_actions_seconds The number of seconds it takes to process each container action
# TYPE engine_daemon_container_actions_seconds histogram
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.005"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.01"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.025"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.05"} 1
engine_daemon_container_actions_seconds_bucket{action="changes",le="0.1"} 1
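The engine_daemon_container_actions_seconds metric is a histogram: each _bucket series counts observations whose duration is less than or equal to the boundary in its le label. In the Graph tab such buckets are usually aggregated with histogram_quantile; a sketch of such an expression (not taken from the listing above) could be:

histogram_quantile(0.95, sum(rate(engine_daemon_container_actions_seconds_bucket[5m])) by (le, action))

which estimates the 95th percentile of container action duration per action over the last five minutes.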
For the Docker daemon to apply these parameters, it must be restarted, which will stop all running containers; when the daemon starts again, the containers will be brought back up according to their restart policy:
essh@kubernetes-master:~$ sudo chmod a+w /etc/docker/daemon.json
essh@kubernetes-master:~$ echo '{"metrics-addr": "127.0.0.1:9323", "experimental": true}' | jq -M -f /dev/null > /etc/docker/daemon.json
essh@kubernetes-master:~$ cat /etc/docker/daemon.json
{
"metrics-addr": "127.0.0.1:9323",
"experimental": true
}
essh@kubernetes-master:~$ sudo systemctl restart docker
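For Prometheus to actually scrape these Docker daemon metrics, a scrape job pointing at 127.0.0.1:9323 has to be added to its configuration, roughly like this (a sketch; the job name is arbitrary and the location of prometheus.yml depends on how Prometheus was installed):

scrape_configs:
  - job_name: 'docker'
    static_configs:
      - targets: ['127.0.0.1:9323']

After changing the configuration, Prometheus itself has to be restarted or reloaded (for example by sending it a SIGHUP) so that it picks up the new target.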
So far Prometheus only collects metrics from different sources on the same server. To collect metrics from different nodes and see an aggregated result, we need to install an agent that gathers node metrics on each node:
essh@kubernetes-master:~$ docker run -d \
-v "/proc:/host/proc" \
-v "/sys:/host/sys" \
-v "/:/rootfs" \
--net="host" \
--name=explorer \
quay.io/prometheus/node-exporter:v0.13.0 \
-collector.procfs /host/proc \