DevOps teams must keep up with the latest cloud computing concepts and technologies to maintain efficient operations. One such solution is Kubernetes, an open-source platform that automates many aspects of application deployment and management. Kubernetes deployment strategies are techniques for deploying or updating applications in Kubernetes, designed to minimize downtime and user disruption during a rollout. The main strategies covered in this article are the rolling update, blue-green deployment, and canary release.

Have you been using Kubernetes for some time but want to take your use cases to the next level? From rolling updates to blue-green deployments, you can use advanced Kubernetes deployment strategies to enhance efficiency and minimize downtime.

This article examines deployment strategies and discusses best practices for creating and deploying Kubernetes-native applications.

Summary of Kubernetes Deployment Strategies

Advanced Deployment Strategies in Kubernetes

Concept / Summary

Before we start
  • Installing prerequisite tools
  • Deploying an AWS EKS cluster
Important concepts related to Kubernetes Deployments
  • What are health check probes?
  • Readiness probe
  • Liveness probe
  • Labels and selectors
Rolling updates
  • The default Kubernetes deployment strategy
  • Incremental updates
Canary deployments
  • Deploy updates without disrupting live traffic
Blue-green deployments
  • Deploying new, breaking updates to the application

Before we start

To demonstrate Kubernetes Deployment Strategies, we will use Amazon Elastic Kubernetes Service (AWS EKS) to create a Kubernetes Cluster.

To follow this tutorial, you’ll need:

  • An AWS account with permissions to create EKS resources
  • The eksctl CLI
  • The AWS CLI, configured with your credentials
  • kubectl

Deploying AWS EKS Cluster

To create an AWS EKS cluster, we will use the eksctl tool. To create the EKS resources, create a cluster.yaml file with the configuration below.

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: eks-deployment-strategies
  region: us-east-1

nodeGroups:
  - name: node-group-1
    instanceType: t3.small
    desiredCapacity: 1   # a single worker node is enough for this demo
    volumeSize: 8        # EBS volume size per node, in GiB

To apply the configuration, execute the command:

❯ eksctl create cluster -f cluster.yaml

This will create an EKS cluster with a node group consisting of a single node in the us-east-1 region. Once the cluster is ready, you should see an output similar to the one below.

2022-09-05 18:47:47 [✔]  EKS cluster "eks-deployment-strategies" in "us-east-1" region is ready.

To interact with the cluster, we must update the kubeconfig file with the new cluster’s access details. To update the kubeconfig, execute the command:

❯ aws eks --region us-east-1 update-kubeconfig --name eks-deployment-strategies

To test cluster access, list the Pods in the default namespace by executing:

❯ kubectl get pods

No resources found in default namespace.

Kubernetes readiness and liveness probes

Kubernetes health checks help keep applications running in containers stable by checking their status regularly and taking corrective action when needed (e.g., automatically restarting crashed containers). They are essential for ensuring the reliability, stability, and scalability of applications deployed on a Kubernetes cluster, so health checks should always be part of any application deployment process involving Kubernetes.

By understanding what each type of probe does and how they work together, you can ensure your application’s performance is up to standard.

What are Probes, and how do they work?

Kubernetes uses various types of probes to check the health of containers:

  • The HTTP/HTTPS probe sends an HTTP request to the container to check if it is healthy.
  • The TCP probe checks if a port on the container is open and accessible.
  • The Command probe runs a command inside the container to determine its health.
  • The gRPC probe sends an RPC request to the container to check its status.

Depending on how these probes respond (e.g., success/failure), Kubernetes can determine if a given container is ready or alive as needed.
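
For reference, here is a minimal sketch of how such probes are declared in a container spec; the pod name, image, and probe details below are illustrative rather than part of this article’s sample application (the deployments later in this article use HTTP probes):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo              # illustrative name
spec:
  containers:
  - name: app
    image: nginx                # illustrative image
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:                # TCP probe: ready once the port accepts connections
        port: 80
      periodSeconds: 5
    livenessProbe:
      exec:                     # command probe: healthy while the command exits 0
        command: ["cat", "/etc/nginx/nginx.conf"]
      initialDelaySeconds: 10
      periodSeconds: 5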

What is a Readiness Probe?

A readiness probe determines if a container is ready to receive requests. The readiness probe periodically sends requests to the container to check whether it is ready for traffic. If the container does not respond correctly or quickly enough, it is marked as “unready,” and any incoming requests are routed elsewhere until it becomes “ready” once again.

What is a Liveness Probe?

A liveness probe detects whether an application running inside a container has stopped responding. It sends periodic requests to the application inside the container. If the application fails to respond within a certain amount of time (or responds with an error), the pod containing the application is restarted so it becomes available again. The probe aims to detect application issues before they affect users.

Labels and selectors

Kubernetes labels and selectors allow you to identify and address related resources efficiently. For example, you can:

  • Apply labels to resources
  • Organize labeled resources into logical groups
  • Query labeled resources to retrieve specific objects from the cluster

This is essential to managing a Kubernetes environment efficiently.

How do labels work in Kubernetes?

Labels are key/value pairs that you can attach to objects such as pods, deployments, services, nodes, or any other resource in a Kubernetes cluster. They act like tags, grouping related resources for easier management. For example, suppose you have several deployments with different versions of the same application running on your cluster. You could assign each deployment an app-version label with the appropriate version number. Then you can quickly identify which deployments belong to which version of your application without manually inspecting each one.
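
To illustrate (the deployment name below is hypothetical), you could attach such a label from the command line and then filter by it:

❯ kubectl label deployment my-app app-version=1.0.0
❯ kubectl get deployments -l app-version=1.0.0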

How do selectors work?

Labels are often used with selectors when creating services or other objects that need access to specific labeled resources within the cluster. For example, if you wanted to create a service that only reached the pods of deployments carrying the label app-version=1, you could specify that label in the service’s selector. This ensures that only pods with the correct label are accessible through the service object.
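
A minimal sketch of such a service definition (the service name is hypothetical):

apiVersion: v1
kind: Service
metadata:
  name: app-v1-service     # hypothetical name
spec:
  selector:
    app-version: "1"       # only pods carrying this label receive traffic
  ports:
  - port: 80
    targetPort: 80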

Strategy #1—Rolling Deployment

The default Kubernetes deployment strategy is the rolling update strategy. When you update the application, new pods gradually replace existing pods as each new version pod becomes available. Kubernetes removes the existing pods from service after replacing them with newer ones, allowing for seamless updates without service interruption.

We use Rolling Deployment when we want to update or upgrade an application with minimal downtime and ensure that users experience minimal disruption. This strategy is suited for applications that must be running continuously, such as web applications, where downtime can adversely affect the user experience.

Rolling Deployment example

Let’s demonstrate the rolling deployment behavior in our EKS cluster. First, we will deploy a sample application using the YAML manifest below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 4
  selector:
    matchLabels:
      name: sample-app
  template:
    metadata:
      labels:
        name: sample-app
    spec:
      containers:
      - name: sample-app
        image: rootedmind/kubernetes-deployment-strategies:v1
        imagePullPolicy: Always
        env:
          - name: STARTUP_TIME
            value: "10"
        ports:
          - name: http
            containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 2
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
  name: sample-app-entry-point
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    name: sample-app
rolling-deployment.yaml

Let’s break down the deployment to understand all the resources.

  • It’s a Deployment object.
  • In the sample-app Pod spec, a single container with the image rootedmind/kubernetes-deployment-strategies:v1 is deployed.
  • The container exposes port 80.
  • There are four replicas of the sample-app pod.
  • readinessProbe and livenessProbe are defined to check the application readiness and liveness status. (In the above deployment, readinessProbe and livenessProbe are testing the same /health endpoint)
  • To access the application externally, a service sample-app-entry-point of the type LoadBalancer is deployed. Note: in order to use a LoadBalancer type Service object in place of an Ingress object for external accessibility, your cluster must run in a supported environment and be configured with the correct cloud load balancer provider package.
To apply this deployment, execute:

❯ kubectl apply -f rolling-deployment.yaml

deployment.apps/sample-app configured
service/sample-app-entry-point configured

You can verify the pods and service status by executing the below command:

❯ kubectl get pods,service 
NAME                              READY   STATUS    RESTARTS   AGE
pod/sample-app-55f495b6df-7nj9x   1/1     Running   0          83m
pod/sample-app-55f495b6df-mbcnw   1/1     Running   0          83m
pod/sample-app-55f495b6df-ngfm2   1/1     Running   0          84m
pod/sample-app-55f495b6df-v2ptx   1/1     Running   0          84m

NAME                             TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
service/kubernetes               ClusterIP      10.100.0.1      <none>                                                                    443/TCP        176m
service/sample-app-entry-point   LoadBalancer   10.100.57.130   a98e5d4a8884c44b0a591ccab9861dba-1061870882.us-east-1.elb.amazonaws.com   80:30123/TCP   121m

Next, we can access the application using the load balancer’s DNS name:

❯ curl a98e5d4a8884c44b0a591ccab9861dba-1061870882.us-east-1.elb.amazonaws.com

kubernetes-deployment-strategies:v1

We get a successful response from version one of the application.

Let’s update the application to v2 and observe how default rolling updates occur.

To update the application, update the image tag in the deployment file to v2.

image: rootedmind/kubernetes-deployment-strategies:v2

To apply the new configuration, execute:

❯ kubectl apply -f rolling-deployment.yaml

deployment.apps/sample-app configured
service/sample-app-entry-point unchanged

To check the rolling update behavior, we can execute the two kubectl commands below.

❯ kubectl get pod --watch

NAME                          READY   STATUS        RESTARTS   AGE
sample-app-55f495b6df-gs2t5   0/1     Running       0          14s
sample-app-55f495b6df-zldn8   0/1     Running       0          14s
sample-app-88b5d4b4b-4gn2q    1/1     Terminating   0          115s
sample-app-88b5d4b4b-8n2s2    1/1     Running       0          115s
sample-app-88b5d4b4b-jb8s5    1/1     Running       0          2m16s
sample-app-88b5d4b4b-nm48k    1/1     Running       0          2m16s
sample-app-55f495b6df-gs2t5   1/1     Running       0          16s
sample-app-88b5d4b4b-nm48k    1/1     Terminating   0          2m18s
sample-app-55f495b6df-zjzmg   0/1     Pending       0          0s
sample-app-55f495b6df-zjzmg   0/1     Pending       0          0s
sample-app-55f495b6df-zjzmg   0/1     ContainerCreating   0          0s
sample-app-55f495b6df-zjzmg   0/1     Running             0          1s
sample-app-55f495b6df-zldn8   1/1     Running             0          21s
sample-app-88b5d4b4b-jb8s5    1/1     Terminating         0          2m23s
sample-app-55f495b6df-2wbc5   0/1     Pending             0          0s
sample-app-55f495b6df-2wbc5   0/1     Pending             0          0s
❯ kubectl rollout status deploy/sample-app

Waiting for deployment "sample-app" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sample-app" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "sample-app" rollout to finish: 3 of 4 updated replicas are available...
deployment "sample-app" successfully rolled out

As you can see in the above outputs, v1 pods are gradually terminated as new v2 pods become available and pass their health probes.

  • kubectl get pod --watch: continuously watches for pod status updates
  • kubectl rollout status deploy/sample-app: reports the rollout status of the sample-app deployment
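
Kubernetes also lets you tune how aggressively a rolling update proceeds through the Deployment’s strategy field. A sketch of the relevant fields (these values are illustrative and are not set in the manifest above):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra pod above the desired replica count
      maxUnavailable: 0    # never take an old pod down before its replacement is ready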

Strategy #2—Canary Deployments

A canary deployment is when a small percentage of users receive the latest update while the rest continue using the existing version. Canary deployments allow you to test new changes before releasing them to the general public. The strategy aims to ensure that new changes do not disrupt existing traffic or cause other issues before they are rolled out widely.

How does Canary Deployment work?

This Kubernetes deployment strategy starts by creating two versions of your website or application: the current version and the updated version. The next step is to set up traffic splitting so that most traffic is routed to the old version, with Kubernetes serving the new version to only a small percentage of users. This allows you to monitor how well each version performs under real-world conditions without disrupting service for everyone.

Finally, if everything looks good after proper testing and monitoring, you can roll out the update to the remaining users and decommission the old version(s). If there are issues detected during testing, though, then you have the option to roll back.

Canary Deployment example

To demonstrate canary deployments, we first deploy a sample application using the YAML manifests below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: v1
spec:
  replicas: 3
  selector:
    matchLabels:
      name: canary-sample-app
      version: 1.0.0
  template:
    metadata:
      labels:
        name: canary-sample-app
        version: 1.0.0
    spec:
      containers:
      - name: canary-sample-app
        image: rootedmind/kubernetes-deployment-strategies:v1
        imagePullPolicy: Always
        env:
          - name: STARTUP_TIME
            value: "10"
        ports:
          - name: http
            containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 2
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
canary-deployment-v1.yaml
apiVersion: v1
kind: Service
metadata:
  name: canary-entry-point
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    version: 1.0.0
canary-deployment-service.yaml

Let’s break down the deployment to understand all the resources.

  • In the canary-sample-app Pod spec, a single container with the image rootedmind/kubernetes-deployment-strategies:v1 is deployed.
  • There are three replicas of the canary-sample-app pod.
  • canary-sample-app deployment is tagged with label version: 1.0.0
  • A service canary-entry-point of the type LoadBalancer is deployed. This service has a selector as version: 1.0.0. Note: in order to use a LoadBalancer type Service object in place of an Ingress object for external accessibility, your cluster must run in a supported environment and be configured with the correct cloud load balancer provider package.

To apply these resources, execute:

❯ kubectl apply -f canary-deployment-v1.yaml 
deployment.apps/v1 created


❯ kubectl apply -f canary-deployment-service.yaml 
service/canary-entry-point created

To get the service access point, execute:

❯ kubectl get service

NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
canary-entry-point       LoadBalancer   10.100.121.45   ad736827a37be4e11bbebc7605a3f28e-1210340537.us-east-1.elb.amazonaws.com   80:31879/TCP   7m53s

Next, if we run the curl command using the above endpoint, we will see the output from version 1 of the application.

❯ curl ad736827a37be4e11bbebc7605a3f28e-1210340537.us-east-1.elb.amazonaws.com

kubernetes-deployment-strategies:v1

Now, let’s deploy the canary version using the YAML manifest below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: canary-sample-app
      version: 2.0.0
  template:
    metadata:
      labels:
        name: canary-sample-app
        version: 2.0.0
    spec:
      containers:
      - name: canary-sample-app
        image: rootedmind/kubernetes-deployment-strategies:v2
        imagePullPolicy: Always
        env:
          - name: STARTUP_TIME
            value: "10"
        ports:
          - name: http
            containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 2
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
canary-deployment-v2.yaml

We have made a few updates in canary-deployment-v2.yaml:

  • In the canary-sample-app Pod spec, a single container with the image rootedmind/kubernetes-deployment-strategies:v2 is deployed.
  • There is one replica of the canary-sample-app v2 pod.
  • canary-sample-app deployment is tagged with label version: 2.0.0

To apply the v2 version, execute:

❯ kubectl apply -f canary-deployment-v2.yaml

deployment.apps/v2 created

Four Pods should now be running: three running version 1.0.0 and one running version 2.0.0. Let’s verify that with:

❯ kubectl get pods --show-labels  

NAME                  READY   STATUS    RESTARTS   AGE    LABELS
v1-68bcb49444-d5rv7   1/1     Running   0          15m    name=canary-sample-app,pod-template-hash=68bcb49444,version=1.0.0
v1-68bcb49444-s9tlv   1/1     Running   0          15m    name=canary-sample-app,pod-template-hash=68bcb49444,version=1.0.0
v1-68bcb49444-wmsp6   1/1     Running   0          15m    name=canary-sample-app,pod-template-hash=68bcb49444,version=1.0.0
v2-79c77f8dd-dfx74    1/1     Running   0          109s   name=canary-sample-app,pod-template-hash=79c77f8dd,version=2.0.0

Now, if we request the service endpoint using curl, we will still get the v1 application response. To split the traffic to the new v2, let’s update the service as below:

apiVersion: v1
kind: Service
metadata:
  name: canary-entry-point
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    name: canary-sample-app

Note the selector has been updated from version: 1.0.0 to name: canary-sample-app.

To apply this new change, execute:

❯ kubectl apply -f canary-deployment-service.yaml 

service/canary-entry-point configured

The name: canary-sample-app label is shared by all the Pods, so the service now distributes traffic across both versions. With three replicas running version 1.0.0 and one canary replica, you should get the response from the version 1.0.0 application 3 out of 4 times.

We can confirm this using the below curl command:

❯ for i in {1..50};
do
    curl --silent ad736827a37be4e11bbebc7605a3f28e-1210340537.us-east-1.elb.amazonaws.com
    sleep 0.5
done

kubernetes-deployment-strategies:v1
kubernetes-deployment-strategies:v1
kubernetes-deployment-strategies:v1
kubernetes-deployment-strategies:v2
kubernetes-deployment-strategies:v1
kubernetes-deployment-strategies:v1
kubernetes-deployment-strategies:v2
kubernetes-deployment-strategies:v1

Indeed, we can see the traffic split between v1 and v2 based on the number of running pods.
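
If the canary performs well, you can promote it by shifting replicas between the two deployments; if problems appear, scale the canary back down instead. A sketch using this example’s deployment names:

❯ kubectl scale deployment v2 --replicas=3   # promote the canary to full capacity
❯ kubectl scale deployment v1 --replicas=0   # retire the old version

# or, to roll the canary back:
❯ kubectl scale deployment v2 --replicas=0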

Strategy #3—Blue-Green Deployments

While a canary deployment rolls out within the same environment, a blue-green deployment runs the new version in an environment separate from the currently running application.

How does blue-green deployment work?

Blue-green deployment involves setting up two identical environments that run in parallel—one “blue” environment and one “green” environment—and switching between them when needed. Each environment runs its own set of services and applications with dedicated resources so you can switch traffic between them quickly and smoothly.

Whenever a new update or feature needs to be rolled out, it is first deployed and tested in the idle environment while the other environment continues serving production traffic. Once verified, traffic is switched over to the environment running the new version. This helps ensure potential issues are identified and addressed before a release reaches users.

Blue-green deployment example

To demonstrate blue-green deployments, we will deploy a sample application using the YAML manifests below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 3
  selector:
    matchLabels:
      name: green-sample-app
      version: v1.0.0
  template:
    metadata:
      labels:
        name: green-sample-app
        version: v1.0.0
    spec:
      containers:
      - name: green-sample-app
        image: rootedmind/kubernetes-deployment-strategies:v1
        imagePullPolicy: Always
        env:
          - name: STARTUP_TIME
            value: "10"
        ports:
          - name: http
            containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 2
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
green-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      name: blue-sample-app
      version: v2.0.0
  template:
    metadata:
      labels:
        name: blue-sample-app
        version: v2.0.0
    spec:
      containers:
      - name: blue-sample-app
        image: rootedmind/kubernetes-deployment-strategies:v2
        imagePullPolicy: Always
        env:
          - name: STARTUP_TIME
            value: "10"
        ports:
          - name: http
            containerPort: 80
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 2
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 15
          timeoutSeconds: 1
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 3
blue-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: blue-green-entry-point
spec:
  ports:
  - port: 80
    targetPort: 80
  type: LoadBalancer
  selector:
    version: v1.0.0
blue-green-deployment-service.yaml

Let’s break down the manifests to understand all the resources.

  • green-deployment.yaml deploys an application with version: v1.0.0 with three replicas of image rootedmind/kubernetes-deployment-strategies:v1
  • blue-deployment.yaml deploys an application with version: v2.0.0 with three replicas of image rootedmind/kubernetes-deployment-strategies:v2
  • blue-green-deployment-service.yaml creates a service that routes traffic to pods with label version: v1.0.0. Note: in order to use a LoadBalancer type Service object in place of an Ingress object for external accessibility, your cluster must run in a supported environment and be configured with the correct cloud load balancer provider package.
Now, let’s apply these manifests:
❯ kubectl apply -f green-deployment.yaml 
deployment.apps/green created

❯ kubectl apply -f blue-deployment.yaml 
deployment.apps/blue created

❯ kubectl apply -f blue-green-deployment-service.yaml 
service/blue-green-entry-point created

To verify the pods and service creation, execute:

❯ kubectl get pods --show-labels

NAME                    READY   STATUS    RESTARTS   AGE   LABELS
blue-78486fd88-6tch4    1/1     Running   0          29s   name=blue-sample-app,pod-template-hash=78486fd88,version=v2.0.0
blue-78486fd88-776lz    1/1     Running   0          29s   name=blue-sample-app,pod-template-hash=78486fd88,version=v2.0.0
blue-78486fd88-rgnlm    1/1     Running   0          29s   name=blue-sample-app,pod-template-hash=78486fd88,version=v2.0.0
green-87b484d94-jss2w   1/1     Running   0          58s   name=green-sample-app,pod-template-hash=87b484d94,version=v1.0.0
green-87b484d94-nfm7f   1/1     Running   0          58s   name=green-sample-app,pod-template-hash=87b484d94,version=v1.0.0
green-87b484d94-zmdb2   1/1     Running   0          58s   name=green-sample-app,pod-template-hash=87b484d94,version=v1.0.0


❯ kubectl get service

NAME                     TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
blue-green-entry-point   LoadBalancer   10.100.254.213   a91aa27c2c8444c7992da16e7bcb1d36-165082809.us-east-1.elb.amazonaws.com   80:30843/TCP   2m4

We can send requests to the service endpoint and observe the application’s response.

❯ curl a91aa27c2c8444c7992da16e7bcb1d36-165082809.us-east-1.elb.amazonaws.com
kubernetes-deployment-strategies:v1

You should consistently get the application responding with v1.0.0. Now, it’s time to transition to v2.0.0.

To switch the traffic to v2.0.0, update the version selector in the blue-green-deployment-service.yaml file.

  selector:
    version: v2.0.0

Now, to update the service, execute:

❯ kubectl apply -f blue-green-deployment-service.yaml

service/blue-green-entry-point configured

Once the service is updated, let’s send requests using curl.

❯ for i in {1..50};
do
    curl --silent a91aa27c2c8444c7992da16e7bcb1d36-165082809.us-east-1.elb.amazonaws.com
    sleep 0.5
done

kubernetes-deployment-strategies:v2
kubernetes-deployment-strategies:v2
kubernetes-deployment-strategies:v2

Indeed, we get the response from the application with v2.0.0.

If the v2.0.0 application is not working as expected, you can easily switch the service selector back and return to the v1.0.0 application.
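
As a quick alternative to editing the manifest, you could patch the selector in place (this sketch is equivalent to re-applying the original service definition):

❯ kubectl patch service blue-green-entry-point -p '{"spec":{"selector":{"version":"v1.0.0"}}}'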

Clean Up

To delete all the Kubernetes resources, execute the command below in the directory containing the YAML manifests.

❯ kubectl delete -f .

Finally, to delete the EKS cluster, execute the command below in the directory containing the cluster.yaml file.

❯ eksctl delete cluster -f cluster.yaml

Kubecost

Kubecost is a cloud optimization platform that provides visibility into Kubernetes infrastructure costs. It helps you to track and monitor spending on Kubernetes workloads and forecast future costs, enabling you to plan your cloud budgets more effectively. Kubecost reduces unnecessary spending and identifies opportunities for cost savings by optimizing the resources allocated to run Kubernetes workloads.

ArgoCD

ArgoCD is an open-source continuous delivery tool that simplifies deploying applications from source code repositories (like GitHub) to Kubernetes clusters. It uses declarative configuration files that define the desired application state and automatically reconciles those configurations with the current application state in Kubernetes. Declarative configurations allow for automated deployment and configuration management across multiple environments, such as staging and production.
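
For illustration, a minimal Argo CD Application manifest might look like the sketch below; the repository URL, path, and target namespace are placeholders:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: sample-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/sample-app.git   # placeholder repository
    targetRevision: main
    path: manifests                                      # placeholder path to the YAML manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert manual changes that drift from Git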

Linkerd

Linkerd service mesh is a network communication platform that provides secure, reliable, and high-performance communication between services running in the Kubernetes cluster. Linkerd offers several features that make managing deployments in Kubernetes easier. For example, with its traffic routing capabilities, you can easily route requests from one service to another—even if they run on different clusters or clouds.
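
For example, Linkerd can split traffic between service versions via the SMI TrafficSplit API (in recent Linkerd versions this requires the linkerd-smi extension), much like the canary pattern above. A sketch with hypothetical service names:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: sample-app-split
spec:
  service: sample-app        # the apex service that clients call
  backends:
  - service: sample-app-v1   # hypothetical backend services
    weight: 900m             # ~90% of traffic
  - service: sample-app-v2
    weight: 100m             # ~10% of traffic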

Conclusion

We have covered Kubernetes deployment strategies, from the default rolling update strategy to advanced strategies such as canary and blue-green deployments, along with a practical implementation of each. We also looked at Kubernetes application health checks using probes and the concepts of labels and selectors. By understanding and applying these deployment strategies, you can keep your applications running efficiently.

Additionally, you can minimize downtime and ensure users always have access to a working version of your software, and you can streamline your deployments to make them as effective as possible. Be sure to choose the strategy that best suits your deployment requirements.