Understanding Application Deployment On Kubernetes Cluster


Before we deploy an application onto a Kubernetes cluster, let me give a quick introduction so that we are aware of what happens while deploying an application onto a cluster.

In my last article, How To Create An Azure Kubernetes Cluster, I spun up a cluster on the Azure platform and specified that we want one node in the cluster. This node will be used to run our containerized applications.

When we run an application in a container on a node of the cluster, it runs inside a pod.

A pod is the smallest and most basic unit of deployment on a cluster. So, when a container needs to run on a node, it actually runs inside a pod. A pod can also be given its own internal IP address.

Normally one pod is created for one container, but you can run multiple containers as part of a single pod. You can also have multiple pods running as part of a particular deployment.
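As an illustration, here is a minimal sketch of a pod manifest. The pod name, container name, and nginx image are placeholder examples, not part of our cluster setup:

```yaml
# Minimal pod manifest; names and image are illustrative placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:          # a pod can list more than one container here
  - name: my-app
    image: nginx:1.25
    ports:
    - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` would schedule the pod onto a node, and `kubectl get pods -o wide` would show the internal IP address it was assigned.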

Kubernetes also has the concept of replicas. To load balance traffic or to achieve high availability, we run multiple pods so that traffic gets distributed across them. We specify the number of replicas to run, and each replica is one copy of the pod.
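In practice, replicas are declared in a Deployment manifest. The sketch below is a generic example with placeholder names, assuming an nginx image:

```yaml
# Deployment sketch: Kubernetes keeps three identical pods running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3              # number of pod copies to run
  selector:
    matchLabels:
      app: my-app
  template:                # pod template each replica is created from
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25
```

A Service placed in front of these pods would then spread incoming traffic across the three replicas.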

Apart from that, another advantage of running containers in a Kubernetes cluster is rolling deployments.

Suppose you want to change the underlying container image: your application is running in its pods on the nodes of the cluster, and you have made a change to the application. You want to deploy the application's new image, and that image needs to reach the respective containers.

This can be done with a rolling deployment in the cluster: it replaces the containers in the pods gradually during the deployment.
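A Deployment performs rolling updates by default, and the pace can be tuned in its spec. This fragment belongs inside a Deployment's `spec` section; the values shown are just one reasonable choice:

```yaml
# Fragment of a Deployment spec controlling the rolling update.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod taken down at a time
      maxSurge: 1         # at most one extra pod created during the rollout
```

With these settings, Kubernetes replaces pods one at a time, so some replicas keep serving traffic throughout the update.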

Apart from that, Kubernetes also provides self-healing: if a pod fails, Kubernetes automatically tries to restart it so that your application is always running.
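Beyond restarting crashed containers, you can tell Kubernetes how to detect that a container is unhealthy by adding a liveness probe to the container definition. This is a generic sketch with placeholder values:

```yaml
# Fragment of a pod spec: restart the container if the probe fails.
containers:
- name: my-app
  image: nginx:1.25
  livenessProbe:
    httpGet:               # Kubernetes polls this HTTP endpoint
      path: /
      port: 80
    initialDelaySeconds: 5 # wait before the first check
    periodSeconds: 10      # check every 10 seconds
```

If the probe fails repeatedly, Kubernetes restarts the container automatically.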

Service Principal

I just want to add a quick note on what a service principal is, because in our use case the Azure Kubernetes Service will pull an image from the Azure Container Registry (ACR).

Azure Container Registry and Azure Kubernetes Service are separate services in Azure, and normally, when you want one Azure service to talk to another, the first service needs to be authorized to use the other.

To achieve this, we could define a user in Azure and use that user's credentials, which would have access to the Azure Container Registry. But instead of authorizing the cluster this way, we can create a service principal, which is another kind of identity available in Azure.

The service principal gets an ID known as the client ID. We then assign a role to the service principal, and the service principal is assigned to our service (the Azure Kubernetes cluster). This role gives the cluster the authorization it needs to pull the image from the Azure Container Registry.
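With the Azure CLI, this can be sketched roughly as follows. The registry and service principal names here are hypothetical placeholders, and running these commands requires an Azure subscription:

```shell
# Look up the resource ID of the container registry
# ("myRegistry" is a placeholder name).
ACR_ID=$(az acr show --name myRegistry --query id --output tsv)

# Create a service principal scoped to the registry with the
# AcrPull role; the returned password is its client secret.
SP_PASSWORD=$(az ad sp create-for-rbac \
  --name myAKSClusterSP \
  --scopes "$ACR_ID" \
  --role acrpull \
  --query password --output tsv)
```

The resulting client ID and secret can then be supplied to the AKS cluster so that it is authorized to pull images from the registry.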


We have discussed the basic concepts of application deployment on an Azure Kubernetes cluster with the Azure Container Registry. We have also learned the basics of the service principal and how it enables communication between two Azure services.