
When developing applications for a public cloud, containerized environments bring different considerations than traditional, “bare-metal” monoliths. One of these is optimizing for quick container startup, which lets the service scale in response to load, minimizes idle resources, enables faster self-healing, and more. This need for quick startup raises questions such as how to know when a container has completed startup or when it is ready to accept traffic.

Enter Kubernetes probes, of which there are three types: liveness, readiness, and startup. This article focuses on readiness probes: checks deployed with your container that tell Kubernetes whether the pod is ready to accept traffic. When a readiness probe fails, the pod keeps running, but Kubernetes stops sending it traffic until the probe succeeds again.

Summary of Kubernetes probe types

Before we dive into readiness probes, let’s review the different types of Kubernetes probes for context. Each type of probe has specific use cases, and it’s essential to understand the differences between them to use them effectively.

Liveness: Determines whether a container is still running. If the probe fails, the kubelet kills the container and restarts it according to the pod’s restart policy. Typical use cases:
  • Detecting hung processes
  • Monitoring resource usage
  • Handling external dependencies
  • Detecting software bugs
  • Upgrading software
  • Maintaining availability

Readiness: Determines whether a container is ready to receive traffic. If the probe fails, the container is not sent traffic until it becomes ready again. Typical use cases:
  • Databases
  • Web applications
  • Microservices
  • Load balancers
  • Continuous deployments

Startup: Determines whether the application inside a container has finished starting. While a startup probe is configured and has not yet succeeded, liveness and readiness checks are held back; if the startup probe ultimately fails, the kubelet kills the container, and it is restarted according to the pod’s restart policy. Typical use cases:
  • Waiting for external dependencies
  • Initializing databases
  • Loading large data sets
  • Waiting for network configuration
  • Running setup scripts

What are readiness probes?

Readiness probes are a Kubernetes feature that lets you determine whether a container is ready to receive traffic. When a readiness probe fails, the kubelet keeps the container running and continues to probe it; because the check failed, it sets the pod’s Ready condition to false, and the pod is removed from the endpoints of any Services that select it.

Readiness probe types

There are four types of readiness probes in Kubernetes, each using a different kind of check (sketched after this list):

  • HTTP probes send an HTTP request to a specified endpoint within the container and expect a response with a status code in the 200-400 range. The endpoint, interval, timeout, and failure threshold can be configured.
  • Command probes run a command inside the container and expect a zero exit status to indicate that the container is ready. Here, the command, interval, timeout, and failure threshold can be configured.
  • TCP probes attempt to open a TCP connection to a specified port; the probe succeeds if the connection can be established.
  • gRPC probes make a remote procedure call to the container using the gRPC Health Checking Protocol.
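As a quick reference, here is roughly how each mechanism is expressed in a pod spec. A probe uses exactly one of these handlers; the paths, ports, and command below are placeholders, and the grpc handler requires Kubernetes v1.24 or newer:

# HTTP probe: succeeds on a response status code between 200 and 399
readinessProbe:
  httpGet:
    path: /healthz            # placeholder endpoint
    port: 8080

# Command (exec) probe: succeeds on a zero exit status
readinessProbe:
  exec:
    command: ["cat", "/tmp/ready"]   # placeholder command

# TCP probe: succeeds if a connection to the port can be opened
readinessProbe:
  tcpSocket:
    port: 8080

# gRPC probe: calls the gRPC Health Checking Protocol (Kubernetes v1.24+)
readinessProbe:
  grpc:
    port: 50051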

As mentioned above, you can configure the following options for each probe:

  • Interval: The time between consecutive probes
  • Timeout: The time to wait for a response before marking the probe as failed
  • Failure threshold: The number of consecutive failures required before marking the container as not ready

Using these configuration options, you can fine-tune your readiness probes to reflect your containers’ readiness accurately. With the use of readiness probes, you can ensure that traffic is only directed to containers that are fully initialized and ready to handle it.
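In a pod spec, these options correspond to the probe fields periodSeconds, timeoutSeconds, and failureThreshold; initialDelaySeconds and successThreshold can also be tuned. A minimal sketch with the default values spelled out in comments:

readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 0   # how long to wait before the first probe
  periodSeconds: 10        # interval between probes
  timeoutSeconds: 1        # how long to wait for a response
  failureThreshold: 3      # consecutive failures before marking the container not ready
  successThreshold: 1      # consecutive successes before marking it ready again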

Common gotchas

Watch out for these common issues when using readiness probes:

  • An exec-based readiness probe spawns a new process in the container on every check, so overly frequent or heavyweight probes can cause excessive resource consumption.
  • When using liveness probes and readiness probes together, remember that liveness probes do not wait for readiness probes to succeed, so a slow-starting container can be restarted before it ever becomes ready. The Kubernetes probe documentation shows how to configure initialDelaySeconds (or a startup probe) to avoid this behavior; see the sketch after this list.
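A minimal sketch of that pattern, with placeholder endpoints and timings (a startup probe is an alternative way to achieve the same thing):

livenessProbe:
  httpGet:
    path: /healthz          # placeholder endpoint
    port: 8080
  initialDelaySeconds: 30   # hold back liveness checks while the app starts
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /ready            # placeholder endpoint
    port: 8080
  periodSeconds: 5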

Example use cases for a Kubernetes readiness probe

The following are some typical use cases for Kubernetes readiness probes:

  • Databases: If you have a database in a container, you can use a readiness probe to check if it’s fully initialized and ready to receive traffic before sending requests.
  • Web applications: A readiness probe can be used to check if a web application is fully loaded and ready to handle requests. This helps ensure that users are not directed to broken or incomplete pages.
  • Microservices: If you have a microservices architecture, you can use a readiness probe to determine if each microservice is ready to receive requests from other services before sending traffic to it.
  • Load balancers: When using a load balancer with a set of pods, a readiness probe can determine if a pod is ready to receive traffic before directing traffic to it.
  • Continuous deployments: If you’re deploying a new version of an application, a readiness probe can confirm that the new version is fully initialized and ready to handle traffic before requests are directed to it.

Examples

HTTP readiness probe for a web server

apiVersion: v1
kind: Pod
metadata:
  name: web-server-pod
spec:
  containers:
  - name: web-server
    image: nginx:latest
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5
      timeoutSeconds: 1
      successThreshold: 1
      failureThreshold: 3

In this example, the web server is a container running the Nginx web server. The readiness probe periodically sends an HTTP GET request to the “/” endpoint on port 80 and expects a response with a status code in the 200-400 range. The probe waits 5 seconds before starting, runs every 5 seconds, and has a timeout of 1 second. If the probe fails three times in a row, the container is considered not ready.


Command readiness probe for a database

apiVersion: v1
kind: Pod
metadata:
  name: database-pod
spec:
  containers:
  - name: database
    image: postgres:latest
    readinessProbe:
      exec:
        command:
        - psql
        - -U
        - postgres
        - -c
        - 'select 1'
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3

In this example, the database is a container running the Postgres database. The readiness probe runs a command using psql to execute a SQL query that selects the value 1. The probe waits 10 seconds before starting, runs every 10 seconds, and has a timeout of 5 seconds. If the probe returns a non-zero exit status three times in a row, the container will be considered not ready.
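As a lighter-weight alternative to running a full SQL query, the standard postgres image also ships the pg_isready utility, which exits with status 0 once the server is accepting connections. A minimal sketch of the same probe using it:

readinessProbe:
  exec:
    command:
    - pg_isready       # checks whether the server is accepting connections
    - -U
    - postgres
  initialDelaySeconds: 10
  periodSeconds: 10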

Readiness probe for a Kafka producer application

apiVersion: v1
kind: Pod
metadata:
  name: kafka-producer-pod
spec:
  containers:
  - name: kafka-producer
    image: confluentinc/cp-kafka-producer:5.5.0
    readinessProbe:
      tcpSocket:
        port: 9092
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 5
      successThreshold: 1
      failureThreshold: 3

In this example, the Kafka producer is a container based on a Confluent Kafka producer image. The readiness probe holds traffic back from the pod until a TCP connection to port 9092 on the container succeeds.

The readiness probe uses a TCP socket check: the kubelet attempts to open a connection to port 9092 on the container. The probe waits 10 seconds before starting, runs every 10 seconds, and has a timeout of 5 seconds. If the probe fails three times in a row, the container is considered not ready. Note that a tcpSocket probe is always executed against the pod itself, so this verifies that the producer container is listening on port 9092, not that a remote Kafka broker is reachable; gating readiness on an external broker requires an exec or HTTP check that actively tests the connection.

TCP readiness probe for a web server

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-webserver
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webserver
  template:
    metadata:
      labels:
        app: webserver
    spec:
      containers:
      - name: my-webserver-container
        image: my-webserver-image:latest
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5
In this example, the web server is made up of three replica pods. The readiness probe periodically sends a TCP check against port 80 of the container every 5 seconds, starting 10 seconds after the container is started. This means that Kubernetes will only direct traffic to the container once it has confirmed that the container is listening on port 80 and ready to handle requests.


gRPC readiness probe for a microservice

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-grpc-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: grpc-service
  template:
    metadata:
      labels:
        app: grpc-service
    spec:
      containers:
      - name: my-grpc-container
        image: my-grpc-image:latest
        ports:
        - containerPort: 50051
        readinessProbe:
          exec:
            command:
            - /bin/grpc_health_probe
            - -addr=:50051
          initialDelaySeconds: 10
          periodSeconds: 5

In this example, the gRPC readiness probe is configured to perform an exec check using the grpc_health_probe command-line tool, which is commonly used to check the health of gRPC services. The probe runs every 5 seconds, starting 10 seconds after the container is started. This ensures that traffic is only directed to our gRPC microservice when it’s ready to handle it.
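Note that Kubernetes v1.24 and later also support a built-in grpc probe type, which performs the same kind of check without bundling the grpc_health_probe binary into the image. A minimal sketch, assuming the service implements the standard gRPC Health Checking Protocol on its serving port:

readinessProbe:
  grpc:
    port: 50051
  initialDelaySeconds: 10
  periodSeconds: 5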

Kubernetes readiness probe tutorial

Prerequisites:

  • Minikube
  • Kubectl
  • curl


This example shows how a readiness probe can fail and how it affects the pod’s availability to receive traffic.

1. Start a local Minikube Kubernetes cluster:

minikube start

2. Create a basic deployment:

kubectl create deployment my-app --image=nginx

3. Expose the deployment and add a readiness probe:

kubectl expose deployment my-app --port=80 --target-port=80
kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx","readinessProbe":{"httpGet":{"path":"/", "port":80}}}]}}}}'

4. If you prefer to apply from a manifest file, save the following to a file called “readiness-probe.yaml”:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: nginx
        image: nginx
        readinessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 5

5. Then apply it with:

kubectl apply -f readiness-probe.yaml

6. Verify the readiness probe:

kubectl describe pod my-app

The output should include the readiness probe configuration, similar to:

Readiness: http-get http://:80/ delay=0s timeout=1s period=10s #success=1 #failure=3

7. Access the app:

minikube service my-app --url

See the output:

http://192.168.39.xx:port

8. Check the pod status:

kubectl get pods

9. Now, let’s edit the readiness probe:

kubectl patch deployment my-app -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","readinessProbe":{"httpGet":{"path":"/", "port":80},"timeoutSeconds":1}}]}}}}'

10. Let’s also edit the container to make it fail the readiness probe:

kubectl edit deployment my-app

In the YAML file, override the container’s command so that Nginx never starts and nothing is listening on port 80:

command:
  - "sh"
  - "-c"
  - "sleep 30"

11. Save the changes and close the file.

12. Verify the status of the pod:

kubectl describe pod my-app

The events should show the readiness probe failing, for example:

Readiness probe failed: Get "http://<pod-ip>:80/": dial tcp <pod-ip>:80: connect: connection refused

The pod will be in a not-ready state and will not receive traffic until it becomes ready again.

Best practices

By following these best practices, you can ensure that your readiness probes effectively detect and address issues with your containers.

Use a meaningful endpoint

The endpoint used by the readiness probe should be representative of the overall readiness of the container. For example, a simple HTTP GET request to the root endpoint may not be enough to determine whether the application is fully loaded and ready to handle requests; a dedicated readiness endpoint that verifies critical dependencies is usually a better signal, as sketched below.
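A minimal sketch; the /readyz path and port are hypothetical and would need to be implemented by the application itself:

readinessProbe:
  httpGet:
    path: /readyz   # hypothetical endpoint that verifies the app's dependencies
    port: 8080
  periodSeconds: 5
  failureThreshold: 3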

Choose appropriate probe parameters

The interval, timeout, and failure threshold should be set appropriately for your use case. You want the interval to be frequent enough to detect failures quickly, the timeout to be long enough for the probe to complete, and the failure threshold to be high enough to tolerate some failures before marking the container as not ready.

Use liveness probes in conjunction with readiness probes

If a container stops responding to its readiness probe, it may be hung or in a degraded state that it cannot recover from on its own. Pair the readiness probe with a liveness probe so that Kubernetes can detect this condition and, if necessary, restart the container.

Monitor probe results

Regularly monitor the results of your readiness probes to detect issues early and identify any potential performance bottlenecks.

Avoid overloading containers

Be careful not to overload containers with traffic while they are still initializing. This can cause the readiness probe to fail and may result in the container being marked as not ready.

Test probes in various scenarios

Regularly test your probes in different scenarios and ensure that the readiness probe represents the overall system, not just the individual container.


Conclusion

By now, you should have a clear understanding of the benefits of using readiness probes in Kubernetes. Additionally, you have seen through examples that setting them up is not complicated, even though there are some common gotchas to be aware of. Armed with this knowledge, you can confidently implement readiness probes in your Kubernetes clusters to ensure that your applications are performing optimally and reliably.
