Containerizing A .NET Core Application Using Docker, ACS And Kubernetes - Part Four

In the previous part, we learned how to set up a Kubernetes cluster using Azure Container Service and connect to it using the kubectl client.

In this part, we are going to run our .NET Core application in the cluster inside Docker containers. In this exercise, we will be using YAML files to spin up resources in the cluster.

Note - You can also spin up resources using only the command line, without a YAML file.
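In fact, a pod and a LoadBalancer service roughly equivalent to the ones we build below could be created purely from the command line, something like this (a hedged sketch: on older kubectl versions, kubectl run creates a Deployment rather than a bare pod, in which case you would expose the deployment instead):

  >> kubectl run demoservice --image=somakdocker/demoservice --port=5000
  >> kubectl expose pod demoservice --type=LoadBalancer --port=80 --target-port=5000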

So, let’s start with a brief introduction to YAML and why it is useful.

YAML, which stands for "YAML Ain't Markup Language" (originally "Yet Another Markup Language"), is a human-readable, text-based format for specifying configuration-type information.

Using YAML for Kubernetes definitions gives you a number of advantages -

  • Convenience
    You’ll no longer have to add all of your parameters to the command line.

  • Maintenance
    YAML files can be added to source control, so you can track changes.

  • Flexibility
    You’ll be able to create much more complex structures using YAML than you can on the command line.

To know more about YAML files and how to use them in Kubernetes, please visit this lovely blog here.
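Just to illustrate the format itself (this is not a Kubernetes definition, only made-up example data), YAML expresses nested key-value pairs through indentation and lists through leading dashes:

  app:
    name: demoservice
    replicas: 3
    ports:
    - 80
    - 5000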

We will first create a simple Kubernetes pod to run our application in a Docker container.

What is a Kubernetes Pod?

As stated in the official Kubernetes documentation, "Pods are the smallest deployable units of computing that can be created and managed in Kubernetes."

A pod (as in a pod of whales or pea pod) is a group of one or more containers (such as Docker containers), the shared storage for those containers, and options about how to run the containers. Pods are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific "logical host": it contains one or more application containers which are relatively tightly coupled; in a pre-container world, they would have executed on the same physical or virtual machine.

To know more about Kubernetes pods in detail, please check here.
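If you prefer exploring from the command line, kubectl can also print the pod schema and its fields, which is handy while writing the YAML in the next step:

  >> kubectl explain pod
  >> kubectl explain pod.spec.containers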

Now, let’s get started with running our .NET Core application in a pod in the Kubernetes cluster.

Step 1 Create the YAML file

In the root directory of your application, create a YAML file; you can name it anything. In my case, I named the file demoservice_pod.yaml.

  apiVersion: v1
  kind: Pod
  metadata:
    name: demoservice
    labels:
      name: demoservice
  spec:
    containers:
    - name: demoservice-container
      image: somakdocker/demoservice
      ports:
      - containerPort: 5000
        name: http-server

In the above YAML file, we use kind: Pod, which means this will create a pod in the cluster. Within the spec, we point the container image to the Docker repo where we pushed the .NET Core WebAPI image in Part 2. In my case, I had pushed the image to somakdocker/demoservice.

Note - Anyone who doesn’t want to use their own image can use somakdocker/demoservice; it’s publicly available on Docker Hub.
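As an optional sanity check, you can pull the image locally before pointing the pod at it (assuming Docker is still installed on your machine from Part 2):

  >> docker pull somakdocker/demoservice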

And if we remember, the .NET Core WebAPI was listening on port 5000 by default, which is why we set containerPort to 5000.

Quite simple, isn’t it?

Another thing to notice here is the labels field, which we have declared as ‘name: demoservice’. This will help us identify the pod when we expose it as a service.

Step 2 Create the pod using kubectl

Now, open the command prompt from the path where you created the YAML file. In my case, it is the root directory where I kept my source file, but it can be any other location in your case.

Remember, for this exercise, we will be using the same machine where we set up the kubectl client and connected to our cluster.

  >> kubectl create -f demoservice_pod.yaml
  >> kubectl get pods

The second command will display the list of all the pods in the cluster.



(Screenshot: kubectl get pods output showing the demoservice pod)

So, we see that the pod named demoservice has been created, and is running.
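If anything looks off, a few standard kubectl commands help in inspecting the pod. The port-forward line below tunnels local port 5000 into the container so you can hit the API even before exposing it publicly; run the curl from a second terminal while port-forward is running. Note that the /api/values route is only an assumption based on the default WebAPI template, so use whatever route your controller actually exposes.

  >> kubectl describe pod demoservice
  >> kubectl logs demoservice
  >> kubectl port-forward demoservice 5000:5000
  >> curl http://localhost:5000/api/values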

Step 3 Expose the pod using a Kubernetes service

Until this point, we have created a pod, and our .NET Core application is running inside it in a Docker container. Now, we need some mechanism to expose the pod to the outside internet. A Kubernetes Service does exactly that for us.

What is a Kubernetes Service?

A Kubernetes Service is an abstraction which defines a logical set of pods and a policy by which to access them - sometimes called a micro-service. The set of pods targeted by a Service is (usually) determined by a Label Selector.

A Service in Kubernetes is a REST object, similar to a Pod. Like all REST objects, a Service definition can be POSTed to the API server to create a new instance.

To learn more about Kubernetes Services in detail, please visit here.

Let us create another YAML file to add a Kubernetes Service on top of the pod that we have just created.

  apiVersion: v1
  kind: Service
  metadata:
    name: service1
    labels:
      name: service1
  spec:
    type: LoadBalancer
    ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
    selector:
      name: demoservice

In my case, I created the file named demoservice_svc.yaml. Now, the most important part here is the selector field. If you notice, we have specified the selector as name: demoservice.

So, what this will do is search for all the pods with the label name: demoservice and expose them as a Kubernetes Service of type LoadBalancer.

This is one of the methods to expose Kubernetes pods as an external service. For more information, please check here.
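Before creating the service, you can preview which pods the selector will pick up by filtering on the same label:

  >> kubectl get pods -l name=demoservice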

  >> kubectl create -f demoservice_svc.yaml
  >> kubectl get services
(Screenshot: kubectl get services output listing service1)

It takes a couple of minutes to acquire the External IP for the service.

Once it is done, run kubectl get services once more, copy the EXTERNAL-IP address for the service1 that we created, and hit it in the browser to see the result.

In my case, service1 was allocated the IP address shown below; in your case, it will be some other value.
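While waiting, you can let kubectl watch the service until the external IP shows up, and once it does, you can hit the API with curl instead of the browser. Again, /api/values is just the default WebAPI template route, so substitute your own, and replace <EXTERNAL-IP> with the address kubectl reports.

  >> kubectl get services --watch
  >> curl http://<EXTERNAL-IP>/api/values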

And, we get the exact same result that we were expecting. Pretty cool isn’t it!!

But we are not done yet. This is the simplest way to run your application inside a Kubernetes cluster, and it is fine for testing, but it is not at all recommended for production or other scenarios where you want your application to be available to your clients at all times.

Kubernetes pods are mortal: they are born, and when they die, they are not resurrected. So, your application will not be available to your users when your pod dies. That is not what we want, right?

The ReplicationController is Kubernetes’s answer to the above problem. A ReplicationController maintains the lifecycle of pods, creating and destroying them dynamically.

What is a ReplicationController?

A ReplicationController ensures that a specified number of pod “replicas” are running at any one time. In other words, a ReplicationController makes sure that a pod or homogeneous set of pods are always up and available. If there are too many pods, it will kill some. If there are too few, the ReplicationController will start more. Unlike manually created pods, the pods maintained by a ReplicationController are automatically replaced if they fail, get deleted, or are terminated.

To know more about ReplicationControllers, please visit here.

Step 4 Create the YAML file for the ReplicationController

Now, let's see how to create a ReplicationController for our application. In the folder where you created the YAML files earlier, create another YAML file; in my case, I named it demoservice_rc.yaml with the below content.

  apiVersion: v1
  kind: ReplicationController
  metadata:
    labels:
      name: demoservice-rc
    name: demoservice-rc
  spec:
    replicas: 3
    selector:
      name: demoservice
    template:
      metadata:
        labels:
          name: demoservice
      spec:
        containers:
        - name: demoservice-container
          image: somakdocker/demoservice
          ports:
          - containerPort: 5000
            name: http-server

So here, we have defined the kind as ReplicationController, named it demoservice-rc, and told it to maintain 3 replicas. You can specify any value according to your requirement.

Now, the part after the template looks familiar, right? It’s just the pod declaration that we created earlier, and our ReplicationController selects the pods with the label ‘name: demoservice’.

Step 5 Create the ReplicationController using kubectl

Before we proceed to create the ReplicationController, let's delete the pod and the service that we have created above.

  >> kubectl delete -f demoservice_svc.yaml
  >> kubectl delete -f demoservice_pod.yaml
Note - You can also use the delete command to delete each resource individually by name.
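For example, the same cleanup by resource name would look like this.

  >> kubectl delete service service1
  >> kubectl delete pod demoservice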

After you have successfully removed the previous resources (check once by running kubectl get services and kubectl get pods), we can proceed with the creation of the ReplicationController.

  >> kubectl create -f demoservice_rc.yaml

You can check that it was created using kubectl get rc.

And if we do a kubectl get pods, we will see 3 pods created, since we specified replicas = 3.


(Screenshot: kubectl get rc and kubectl get pods output showing the ReplicationController and its 3 pods)

So, what advantage do we get using a ReplicationController over plain, simple pods? To check that, simply delete any of the 3 pods that were created by demoservice-rc and see the magic.

demoservice-rc will immediately spin up a new pod, since it was told to maintain 3 replicas.
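To see it in action, delete one of the pods by name (grab any pod name from kubectl get pods; the exact names will differ in your cluster) and list the pods again. You can also resize the set at any time with kubectl scale.

  >> kubectl delete pod <pod-name>
  >> kubectl get pods
  >> kubectl scale rc demoservice-rc --replicas=5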

To get a list of pods along with their label information, use the below command.

  >> kubectl get pods --show-labels
(Screenshot: kubectl get pods --show-labels output showing the pod labels)

Step 6 Create the Kubernetes service

The best part is that we can use the same service declaration to expose these pods to the internet.

  apiVersion: v1
  kind: Service
  metadata:
    name: service1
    labels:
      name: service1
  spec:
    type: LoadBalancer
    ports:
    - port: 80
      targetPort: 5000
      protocol: TCP
    selector:
      name: demoservice

This is because, if you notice, we used the same label for all three pods created by the ReplicationController. Thus, as I mentioned earlier, the above service will expose all the pods that carry the label ‘name: demoservice’ behind a LoadBalancer to the internet.

To create the service use the same command.

  >> kubectl create -f demoservice_svc.yaml
  >> kubectl describe services service1
(Screenshot: kubectl describe services service1 output)

So, now we have 3 pods running our .NET Core WebAPI, a LoadBalancer service exposing them to the internet, and a ReplicationController maintaining the lifecycle of the pods, i.e., recreating them dynamically when needed.
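To confirm the load balancer is really fronting all three pods, you can list the endpoints behind the service; there should be three pod IPs, each on port 5000.

  >> kubectl get endpoints service1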

Note

For production scenarios, there are Kubernetes Deployments, which are the next-generation ReplicationController and give us several advantages on top of it. Since this post is intended to be a quick start, I will not go into detail about Deployments, but they work on the same concept as the ReplicationController with a few added advantages. If you want to know more about Kubernetes Deployments, please check here.
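Just as a taste, a Deployment equivalent to our ReplicationController would look roughly like the sketch below. This is a minimal, untested example; the name demoservice-deployment is my own, and depending on your cluster version the apiVersion may differ (older clusters used extensions/v1beta1 or apps/v1beta1 instead of apps/v1).

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: demoservice-deployment
  spec:
    replicas: 3
    selector:
      matchLabels:
        name: demoservice
    template:
      metadata:
        labels:
          name: demoservice
      spec:
        containers:
        - name: demoservice-container
          image: somakdocker/demoservice
          ports:
          - containerPort: 5000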

With this, we come to the end of this hands-on exercise. I hope you guys find these articles useful. Do share them with your friends and colleagues who want to get a head start with .NET Core, Docker, and Kubernetes.

For any queries or suggestions, please comment below and I will be happy to answer them.

Till then, happy exploring!