Understanding Kubernetes Pod Health Checks: The Three Types of Probes

Shuntaro
6 min read · Nov 24, 2023

In Kubernetes, there are three types of probes designed for health-checking Pods. This article explains how to use these probes and their differences.

Probes are a crucial Kubernetes feature for delivering a reliable service to end users. At their core, probes check at regular intervals whether a Pod can adequately process traffic, and they trigger actions such as restarts when necessary.

ReadinessProbe: Ensuring Service Preparedness

What is ReadinessProbe?

A ReadinessProbe is something you should always consider when putting a Service in front of your Pods.

A Service routes traffic to every Pod that matches its label selector, without checking whether those Pods can actually serve. If a Pod needs time to initialize after startup, it may receive traffic before it is ready, leading to errors.

To solve this, you configure a ReadinessProbe in the Pod's YAML. Until the probe succeeds, the Pod is not added to the Service's endpoints, so no traffic from the Service is routed to it.
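
To make the label-matching behavior concrete, here is a minimal, hypothetical Service and Pod pair (the names and the app: myapp label are placeholders): the Service forwards traffic to any Pod carrying the matching label, and a ReadinessProbe is what keeps a not-yet-ready Pod out of that pool.

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp          # traffic goes to Pods carrying this label
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp          # matches the Service selector above
spec:
  containers:
  - name: myapp-container
    image: myapp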

How to Configure a ReadinessProbe

There are three main patterns for setting up a ReadinessProbe:

  1. HTTP: Checks if the Pod is ready by sending an HTTP request. A response code between 200 and 399 indicates readiness.
  2. Command: Determines readiness by executing a command. If the command returns an exit code of 0, the Pod is ready.
  3. TCP: Checks readiness by attempting a TCP connection. Successful connection establishment signifies readiness.

Let’s look at examples of each:

1. HTTP

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-readinessprobe
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5

Here, initialDelaySeconds sets how long to wait after the container starts before the first check, and periodSeconds sets how often checks run. This configuration sends an HTTP GET request to the /ready endpoint 5 seconds after the Pod starts and every 5 seconds thereafter. If the HTTP status code is between 200 and 399, the Pod is considered ready and the Service starts routing requests to it.

2. Command

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/ready
      initialDelaySeconds: 5
      periodSeconds: 5

This pattern specifies a command. If cat /tmp/ready returns an exit code of 0 (meaning the file exists), the Pod is deemed ready. Your application should create this file once it has finished initializing, as in the sketch below.
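
As a rough illustration of that last point, here is a minimal, hypothetical Go application (the file path simply mirrors the probe above) that creates the marker file only after its slow initialization finishes:

package main

import (
	"log"
	"os"
)

func main() {
	// ... slow initialization work: load config, warm caches, connect to dependencies ...

	// Signal readiness by creating the file that the exec probe (`cat /tmp/ready`) checks for.
	if err := os.WriteFile("/tmp/ready", []byte("ok"), 0o644); err != nil {
		log.Fatalf("failed to write readiness marker: %v", err)
	}

	// ... start serving traffic; this sketch just blocks forever ...
	select {}
}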

3. TCP

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5

In this pattern, you specify a TCP port. The Pod is ready if it can accept connections on TCP port 80. This is useful for applications that don’t use HTTP or can’t place a specific file.

For real-world applications, probing an endpoint that reflects the application's actual state is more effective. For example, if the app depends on a database, the readiness endpoint can open a MySQL connection to confirm that the app can reach it. A check that actually exercises the database connection tells you far more than a static page that always returns 200.
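
Here is a minimal sketch of such a readiness endpoint in Go, assuming the widely used github.com/go-sql-driver/mysql driver and a hypothetical DSN; the handler reports ready only when a database ping succeeds:

package main

import (
	"context"
	"database/sql"
	"log"
	"net/http"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL driver (assumed dependency)
)

func main() {
	// Hypothetical DSN; in practice this would come from configuration or a Secret.
	db, err := sql.Open("mysql", "user:password@tcp(db:3306)/app")
	if err != nil {
		log.Fatalf("failed to configure database handle: %v", err)
	}

	// /ready succeeds only if the database answers a ping within two seconds.
	http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		ctx, cancel := context.WithTimeout(r.Context(), 2*time.Second)
		defer cancel()
		if err := db.PingContext(ctx); err != nil {
			http.Error(w, "database unreachable", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	log.Fatal(http.ListenAndServe(":80", nil))
}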

LivenessProbe: Keeping Your Application Alive

What is LivenessProbe?

A LivenessProbe is configured much like a ReadinessProbe. But while a ReadinessProbe signals whether a Pod is ready to handle traffic, a LivenessProbe performs ongoing health checks on a running container.

A Service cannot tell whether the application inside a Pod has hung or crashed internally, so it may keep sending traffic to a Pod that can no longer do useful work.

By setting a LivenessProbe, Kubernetes continuously checks the health of the container and restarts it if problems are detected.

How to Configure a LivenessProbe

The configuration patterns are the same as for ReadinessProbe. Let’s look at each setup method.

1. HTTP

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod-livenessprobe
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      httpGet:
        path: /health
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20

This example uses an HTTP GET request for health checks. It sends a request to the /health endpoint 15 seconds after the container starts and every 20 seconds thereafter. If the response code is between 200 and 399, the Pod is considered healthy.

2. Command

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 15
      periodSeconds: 20

This example executes cat /tmp/healthy. If the exit code is 0 (the file exists), the container is considered healthy.

3. TCP

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 15
      periodSeconds: 20

In this TCP socket example, the Pod’s health is determined by its ability to accept connections on TCP port 80.

StartupProbe: Handling Slow-Starting Applications

What is StartupProbe?

StartupProbe is a relatively new feature, introduced in Kubernetes 1.16 as an alpha version and improved in later versions. It became a stable feature in Kubernetes 1.20.

StartupProbe is designed for checking the health of applications that take a long time to start. Unlike LivenessProbe and ReadinessProbe, StartupProbe is only executed during the initial start of the container.

For applications that take a long time to start, a LivenessProbe might prematurely decide the Pod is unhealthy and trigger a restart. StartupProbe solves this by disabling LivenessProbe and ReadinessProbe checks until it succeeds, giving the application time to start and stabilize.

How to Configure a StartupProbe

Again, there are three types:

1. HTTP

apiVersion: v1
kind: Pod
metadata:
  name: slow-start-app
spec:
  containers:
  - name: myapp
    image: myapp
    startupProbe:
      httpGet:
        path: /start
        port: 8080
      failureThreshold: 30
      periodSeconds: 10

This example uses an HTTP GET request to check startup status. It sends requests to the /start endpoint every 10 seconds. With failureThreshold set to 30, the probe tolerates up to 30 consecutive failures, giving the application 30 × 10 = 300 seconds to start before the container is killed and restarted.

2. Command

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    startupProbe:
      exec:
        command:
        - cat
        - /tmp/start
      failureThreshold: 30
      periodSeconds: 10

This example checks startup status by executing cat /tmp/start. The application is considered started once the command returns an exit code of 0. The check runs every 10 seconds, allowing up to 30 failures.

3. TCP

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    startupProbe:
      tcpSocket:
        port: 8080
      failureThreshold: 30
      periodSeconds: 10

In this TCP socket example, the application is considered started once the container accepts connections on TCP port 8080. The check runs every 10 seconds, with a maximum of 30 failures allowed.

Combining the Three Probes Effectively

In Kubernetes, combining the ReadinessProbe, LivenessProbe, and StartupProbe improves reliability by keeping traffic away from unhealthy Pods and improves operational efficiency through automated state monitoring. As explained above, each probe serves a different purpose, and using them together enables finer-grained monitoring and appropriate responses to an application's operational status.

Roles of the Three Probes

Let’s summarize the roles of the three probes:

  • StartupProbe: Used when an application starts for the first time, particularly useful for slow-starting applications. Until this probe succeeds, LivenessProbe and ReadinessProbe are suspended.
  • LivenessProbe: Regularly checks if the application is running and restarts the Pod if problems are detected.
  • ReadinessProbe: Checks if the application is ready to handle traffic, preventing traffic routing until readiness is established.

Example Configuration

Here’s an example YAML configuration combining the three probes:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp
    startupProbe:
      httpGet:
        path: /startup
        port: 8080
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:
      httpGet:
        path: /health
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 5

In the example above:

  • StartupProbe: Monitors the application’s startup by checking the /startup endpoint. It allows up to 30 failures, checking every 10 seconds, for a maximum of 300 seconds.
  • LivenessProbe: Runs once the StartupProbe has succeeded, verifying through the /health endpoint that the application is operating correctly and restarting the container if necessary. Checks are performed every 10 seconds.
  • ReadinessProbe: Also runs once the StartupProbe has succeeded, determining through the /ready endpoint whether the application is ready to handle traffic; until it passes, the Pod receives no traffic from the Service. Checks are repeated every 5 seconds.

By appropriately combining these three probes, you can ensure that the application starts correctly, operates stably, and handles traffic at the right times.
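
To make the division of labor concrete, here is a minimal, hypothetical Go sketch of an application that backs the three endpoints from the manifest above; the initialization and health logic are placeholders:

package main

import (
	"log"
	"net/http"
	"sync/atomic"
	"time"
)

func main() {
	var started atomic.Bool // flips to true once slow initialization has finished

	// /startup answers the StartupProbe: fail until initialization is done.
	http.HandleFunc("/startup", func(w http.ResponseWriter, r *http.Request) {
		if !started.Load() {
			http.Error(w, "still starting", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// /health answers the LivenessProbe: return 200 while the process can still do work.
	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
	})

	// /ready answers the ReadinessProbe: accept traffic only after startup
	// (a real app would also check its dependencies here).
	http.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
		if !started.Load() {
			http.Error(w, "not ready", http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	})

	// Simulate slow initialization in the background.
	go func() {
		time.Sleep(30 * time.Second) // placeholder for real startup work
		started.Store(true)
	}()

	log.Fatal(http.ListenAndServe(":8080", nil))
}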

Recent Kubernetes versions also support gRPC-based health checks (stable since Kubernetes 1.27). For more details, refer to the official page: https://kubernetes.io/ja/docs/concepts/workloads/pods/pod-lifecycle/#probe-check-methods
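
As a quick sketch, assuming the container implements the standard gRPC health-checking service and listens on port 8080 (a placeholder), a gRPC liveness probe looks like this:

livenessProbe:
  grpc:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20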

Final Thoughts

By leveraging the probe features of Kubernetes as discussed in this article, your applications can become more robust and reliable. In real-world deployments, correctly configuring these probes is key to maintaining service quality and enhancing user experiences. The journey with Kubernetes is never-ending, with each step offering new discoveries and opportunities for improvement. I hope this article serves as a helpful guide in your Kubernetes journey.
