Health and Monitoring
Liveness and Readiness Probes
A Probe is a diagnostic performed periodically by the kubelet on a Container. To perform a diagnostic, the kubelet calls a Handler implemented by the Container. There are three types of handlers:
ExecAction: Executes a specified command inside the Container. The diagnostic is considered successful if the command exits with a status code of 0.
TCPSocketAction: Performs a TCP check against the Container’s IP address on a specified port. The diagnostic is considered successful if the port is open.
HTTPGetAction: Performs an HTTP GET request against the Container’s IP address on a specified port and path. The diagnostic is considered successful if the response has a status code greater than or equal to 200 and less than 400.
The kubelet can optionally perform and react to three kinds of probes on running Containers:
livenessProbe: Indicates whether the Container is running. If the liveness probe fails, the kubelet kills the Container. Runs for the lifetime of the Container.
readinessProbe: Indicates whether the Container is ready to serve requests. If the readiness probe fails, the Pod is removed from the endpoints of matching Services. Runs for the lifetime of the Container.
startupProbe: Indicates whether the application inside the Container has started. All other probes are disabled until the startup probe succeeds. Runs only during startup.
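As an illustration, the following is a minimal sketch that combines all three probe kinds on one container. The image name, port, and the /healthz and /ready endpoints are hypothetical; adjust them to match your application.
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: registry.example.com/web-app:latest  # hypothetical image
    ports:
    - containerPort: 8080
    startupProbe:
      # Give a slow-starting application up to 30 * 5s = 150s before the other probes take over.
      httpGet:
        path: /healthz
        port: 8080
      failureThreshold: 30
      periodSeconds: 5
    livenessProbe:
      # Restart the container after three consecutive failed checks.
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 3
    readinessProbe:
      # Remove the Pod from Service endpoints while /ready reports a failure.
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 10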
Resources
- Application Health: A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks.
- Virtual Machine Health: Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs).
- Container Probes: To perform a diagnostic, the kubelet either executes code within the container, or makes a network request.
- Configure Probes: Read about how to configure liveness, readiness, and startup probes for containers.
References
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo Hello, Kubernetes! && sleep 3600"]
    livenessProbe:
      exec:
        # The container is considered healthy as long as this command exits with status 0.
        command: ["echo", "alive"]
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  shareProcessNamespace: true
  containers:
  - name: app
    image: bitnami/nginx
    ports:
    - containerPort: 8080
    livenessProbe:
      # Liveness is checked by opening a TCP connection to port 8080.
      tcpSocket:
        port: 8080
      initialDelaySeconds: 10
    readinessProbe:
      # Readiness is checked with an HTTP GET against / on port 8080.
      httpGet:
        path: /
        port: 8080
      periodSeconds: 10
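Once either pod is running, probe failures and any resulting container restarts appear in the events listed by kubectl describe pod my-pod.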
Container Logging
Application and system logs can help you understand what is happening inside your cluster. The logs are particularly useful for debugging problems and monitoring cluster activity.
Kubernetes provides no native storage solution for log data, but you can integrate many existing logging solutions into your Kubernetes cluster.
Resources
- Logs Command: Read about the descriptions and example commands for OpenShift CLI (oc) developer commands.
- Cluster Logging: As a cluster administrator, you can deploy logging on an OpenShift Container Platform cluster, and use it to collect and aggregate node system audit logs, application container logs, and infrastructure logs.
- Logging Collector: The collector collects log data from each node, transforms the data, and forwards it to configured outputs.
- Logging: Application logs can help you understand what is happening inside your application and are particularly useful for debugging problems and monitoring cluster activity.
References
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Write a timestamped counter line to stdout every five seconds.
    command:
      [
        "sh",
        "-c",
        'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 5; done',
      ]
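Once the pod is running, you can read its output with kubectl logs counter, or stream it as it is written with kubectl logs -f counter.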
Monitoring Applications
To scale an application and provide a reliable service, you need to understand how the application behaves when it is deployed. You can examine application performance in a Kubernetes cluster by looking at the containers, pods, services, and the characteristics of the overall cluster. Kubernetes provides detailed information about an application’s resource usage at each of these levels. This information allows you to evaluate your application’s performance and identify where bottlenecks can be removed to improve overall performance.
Prometheus, a CNCF project, can natively monitor Kubernetes, nodes, and Prometheus itself.
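For example, with the Metrics API available (typically provided by the metrics server), kubectl top nodes and kubectl top pods report current CPU and memory usage at the node and pod level.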
Resources
- Monitoring Application Health: OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers.
- Monitoring Resource Usage: You can examine application performance in a Kubernetes cluster by examining the containers, pods, services, and the characteristics of the overall cluster.
- Resource Metrics: For Kubernetes, the Metrics API offers a basic set of metrics to support automatic scaling and similar use cases.
References
apiVersion: v1
kind: Pod
metadata:
  name: 500m
spec:
  containers:
  - name: app
    image: gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4
    resources:
      requests:
        cpu: 700m
        memory: 128Mi
  - name: busybox-sidecar
    image: radial/busyboxplus:curl
    # Repeatedly ask the resource-consumer container (reachable on localhost within the Pod)
    # to consume 500 millicores of CPU for one hour, then keep the sidecar alive.
    command:
      [
        "/bin/sh",
        "-c",
        'until curl localhost:8080/ConsumeCPU -d "millicores=500&durationSec=3600"; do sleep 5; done && sleep 3700',
      ]
apiVersion: v1
kind: Pod
metadata:
  name: 200m
spec:
  containers:
  - name: app
    image: gcr.io/kubernetes-e2e-test-images/resource-consumer:1.4
    resources:
      requests:
        cpu: 300m
        memory: 64Mi
  - name: busybox-sidecar
    image: radial/busyboxplus:curl
    # Repeatedly ask the resource-consumer container to consume 200 millicores of CPU
    # for one hour, then keep the sidecar alive.
    command:
      [
        "/bin/sh",
        "-c",
        'until curl localhost:8080/ConsumeCPU -d "millicores=200&durationSec=3600"; do sleep 5; done && sleep 3700',
      ]
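The Metrics API mentioned in the resources above also drives automatic scaling. The following is a minimal sketch of a HorizontalPodAutoscaler; it assumes a hypothetical Deployment named app whose containers declare CPU requests (standalone Pods like the ones above cannot be scaled by an autoscaler). With the metrics server installed, it scales the Deployment between 1 and 5 replicas to keep average CPU utilization around 80%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app  # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80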
Activities
| Task | Description | Link |
|---|---|---|
| Try It Yourself | | |
| Probes | Create some Health & Startup Probes to find what's causing an issue. | Probes |