Application Deployment On Azure Kubernetes Service - Part Three

Introduction

In this article, we will complete the end-to-end deployment of the guestbook application on Azure Kubernetes Service. We will expose the Redis master, deploy the Redis slaves and the front end, and make the application publicly accessible through an Azure load balancer.
What we will cover:
  • What is a Kubernetes Service?
  • Create and expose Redis master service
  • Deploy the Redis slaves
  • Deploy the front end of the application
  • Expose the front-end service
  • What is the Azure Load Balancer?
  • Let's play with the application
Prerequisites
In the previous article of this series, Application Deployment on AKS Part 2, we configured the Redis master to load configuration data from a ConfigMap. Having covered the dynamic configuration of applications using a ConfigMap, we will now return to deploying the rest of the guestbook application. You will once again come across the concepts of Deployments, ReplicaSets, and Pods for the back end and front end. Apart from this, you will be introduced to another key concept, called a service.
 

What is a Kubernetes Service?

 
A service is a grouping of Pods that are running on the cluster. Like a Pod, a Kubernetes service is a REST object. A service is both an abstraction that defines a logical set of Pods and a policy for accessing that set. It helps Pods scale very easily, and you can have many services within a cluster. Kubernetes services can efficiently power a microservice architecture.
 
To start the complete end-to-end deployment, we are going to create a service to expose the Redis master.
 

Create and expose Redis master service

 
When exposing a port in plain Docker, the exposed port is constrained to the host it is running on. With Kubernetes networking, there is network connectivity between different Pods in the cluster. However, Pods are ephemeral by nature: they can be shut down, restarted, or even moved to other hosts without keeping their IP address. If you were to connect to the IP of a Pod directly, you might lose connectivity when that Pod is moved to a new host.
 
Kubernetes provides the service object, which handles this exact problem. Using label matching selectors, it proxies traffic to the right Pods and does load balancing. In this case, the master has only one Pod, so it just ensures that the traffic is directed to the Pod independent of the node the Pod runs on. To create the Service, run the following command:
  kubectl apply -f redis-master-service.yaml
 
The Redis master Service has the following content:
  1. apiVersion: v1  
  2. kind: Service  
  3. metadata:  
  4.   name: redis-master  
  5.   labels:  
  6.     app: redis  
  7.     role: master  
  8.     tier: backend  
  9. spec:  
  10.   ports:  
  11.   - port: 6379  
  12.     targetPort: 6379  
  13.   selector:  
  14.     app: redis  
  15.     role: master  
  16.     tier: backend  
Let's now see what you have created using the preceding code:
 
Lines 1-8
 
These lines tell Kubernetes that we want a service called redis-master, which has the same labels as our redis-master server Pod.
  apiVersion: v1
  kind: Service
  metadata:
    name: redis-master
    labels:
      app: redis
      role: master
      tier: backend
Lines 10-12
 
These lines indicate that the service should handle traffic arriving at port 6379 and forward it to port 6379 of the Pods that match the selector defined between lines 13 and 16.
  spec:
    ports:
    - port: 6379
      targetPort: 6379
Lines 13-16
 
These lines are used to find the Pods to which the incoming traffic needs to be proxied. So, any Pod with labels matching (app: redis, role: master, and tier: backend) is expected to handle traffic on port 6379. If you look back at the previous example, those are the exact labels we applied to that deployment.
  selector:
    app: redis
    role: master
    tier: backend
We can check the properties of the service by running the following command:
  kubectl get service
This will give you an output as shown in the screenshot below:
 
 
You see that a new service, named redis-master, has been created. It has a cluster-wide IP of 10.0.183.90 (in your case, the IP will likely be different).
 
Note
This IP address works only within the cluster (hence the type is called ClusterIP).
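To verify that the service is wired up correctly from inside the cluster, you can run two quick checks. The first lists the Pod IPs the service currently proxies to; the second starts a throwaway Redis client Pod (the name redis-test is just an illustrative choice) and should print PONG if the service resolves and forwards traffic:
  kubectl get endpoints redis-master
  kubectl run redis-test --rm -it --image=redis -- redis-cli -h redis-master ping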
 
We have now exposed the Redis master service. Next, let's deploy the Redis slaves.
 

Deploying the Redis slaves

 
Running a single back end in the cloud is not recommended. Instead, you can configure Redis in a master-slave setup: a single master serves write traffic, while multiple slaves handle read traffic. This is useful both for handling increased read traffic and for high availability.
 
The following steps will help us to deploy the Redis slaves:
 
Step 1
 
Create the deployment by running the following command:
  kubectl apply -f redis-slave-deployment.yaml
 
Step 2
 
Let's check all the resources that have been created now:
  kubectl get all
This will give you an output as shown in the screenshot:
 
 
Step 3
 
Based on the preceding output, you can see that you created two replicas of the redis-slave Pods. This can be confirmed by examining the redis-slave-deployment.yaml file:
  1. apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2  
  2. kind: Deployment  
  3. metadata:  
  4.   name: redis-slave  
  5.   labels:  
  6.     app: redis  
  7. spec:  
  8.   selector:  
  9.     matchLabels:  
  10.       app: redis  
  11.       role: slave  
  12.       tier: backend  
  13.   replicas: 2  
  14.   template:  
  15.     metadata:  
  16.       labels:  
  17.         app: redis  
  18.         role: slave  
  19.         tier: backend  
  20.     spec:  
  21.       containers:  
  22.       - name: slave  
  23.         image: gcr.io/google_samples/gb-redisslave:v1  
  24.         resources:  
  25.           requests:  
  26.             cpu: 100m  
  27.             memory: 100Mi  
  28.         env:  
  29.         - name: GET_HOSTS_FROM  
  30.           value: dns  
  31.           # Using `GET_HOSTS_FROM=dns` requires your cluster to  
  32.           # provide a dns service. As of Kubernetes 1.3, DNS is a built-in  
  33.           # service launched automatically. However, if the cluster you are using  
  34.           # does not have a built-in DNS service, you can instead  
  35.           # access an environment variable to find the master  
  36.           # service's host. To do so, comment out the 'value: dns' line above, and  
  37.           # uncomment the line below:  
  38.           # value: env  
  39.         ports:  
  40.         - containerPort: 6379  
Everything is the same as in the master deployment, except for the following:
 
Line 13
 
The number of replicas is 2.
  replicas: 2
Line 23
 
You are now using a specific slave image.
  image: gcr.io/google_samples/gb-redisslave:v1
Lines 29-30
 
These lines set GET_HOSTS_FROM to dns. As you saw in the previous example, names resolve through DNS within the cluster.
  - name: GET_HOSTS_FROM
    value: dns
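If you want to see this in-cluster DNS resolution for yourself, a small illustrative check is to run a throwaway Pod and look up the service name (busybox:1.28 is commonly used for this because its nslookup behaves well; the Pod name dns-test is arbitrary):
  kubectl run dns-test --rm -it --image=busybox:1.28 -- nslookup redis-master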
Step 4
 
Like the master service, you need to expose the slave service by running the following:
  kubectl apply -f redis-slave-service.yaml

The only difference between this service and the redis-master service is that this service proxies traffic to Pods that have the role: slave label.
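The contents of redis-slave-service.yaml are not reproduced here; assuming it mirrors the master service with the slave labels, it would look roughly like this sketch:
  apiVersion: v1
  kind: Service
  metadata:
    name: redis-slave
    labels:
      app: redis
      role: slave
      tier: backend
  spec:
    ports:
    - port: 6379
    selector:
      app: redis
      role: slave
      tier: backend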
 
Step 5
 
Check the redis-slave service by running the following command:
  kubectl get service
This will give you an output as shown in the screenshot below:
 
 
Now we have a Redis cluster up and running, with a single master and two slave replicas. Let's deploy and expose the front end.
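Before doing so, you can double-check the Redis tier with a label selector; this should list one master Pod and two slave Pods:
  kubectl get pods -l app=redis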
 

Deploy the Front End of the Application

 
Until now, we have focused on the Redis back end. We are now ready to deploy the front end, which will add a graphical web page to our application that we can interact with.
 
Step 1
 
You can create the front end using the following command:
  kubectl apply -f frontend-deployment.yaml
 
Step 2
 
To verify the deployment, run this code:
  kubectl get pods
This will display the output shown in the screenshot below:
 
 
You will notice that this deployment specifies three replicas. The deployment has the usual aspects, with minor changes, as shown in the following code:
  1. apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2  
  2. kind: Deployment  
  3. metadata:  
  4.   name: frontend  
  5.   labels:  
  6.     app: guestbook  
  7. spec:  
  8.   selector:  
  9.     matchLabels:  
  10.       app: guestbook  
  11.       tier: frontend  
  12.   replicas: 3  
  13.   template:  
  14.     metadata:  
  15.       labels:  
  16.         app: guestbook  
  17.         tier: frontend  
  18.     spec:  
  19.       containers:  
  20.       - name: php-redis  
  21.         image: gcr.io/google-samples/gb-frontend:v4  
  22.         resources:  
  23.           requests:  
  24.             cpu: 100m  
  25.             memory: 100Mi  
  26.         env:  
  27.         - name: GET_HOSTS_FROM  
  28.           value: dns  
  29.           # Using `GET_HOSTS_FROM=dns` requires your cluster to  
  30.           # provide a dns service. As of Kubernetes 1.3, DNS is a built-in  
  31.           # service launched automatically. However, if the cluster you are using  
  32.           # does not have a built-in DNS service, you can instead  
  33.           # access an environment variable to find the master  
  34.           # service's host. To do so, comment out the 'value: dns' line above, and  
  35.           # uncomment the line below:  
  36.           # value: env  
  37.         ports:  
  38.         - containerPort: 80  
Let's see these changes.
Line 12
 
The replica count is set to 3.
  replicas: 3
Lines 9-11 and 15-17
 
The labels are set to app: guestbook and tier: frontend.
  matchLabels:
    app: guestbook
    tier: frontend

  labels:
    app: guestbook
    tier: frontend
Line 21
 
gb-frontend:v4 is used as an image.
  image: gcr.io/google-samples/gb-frontend:v4
Now we have created the front-end deployment; next, you need to expose it as a service.
 

Expose the Front-end Service 

 
There are multiple ways to define a Kubernetes service.
 
ClusterIP
 
This default type exposes the service on a cluster-internal IP, which means the service can be reached only from within the cluster. The two Redis services we created were of type ClusterIP: they are exposed on an IP that is reachable only from inside the cluster, as shown in the screenshot below.
 
 
NodePort
 
This type of service exposes the service on each node's IP at a static port. A ClusterIP service is created automatically, and the NodePort service routes to it. From outside the cluster, you can contact the NodePort service using <NodeIP>:<NodePort>. The service is exposed on a static port on each node, as shown in the screenshot below.
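As a hedged illustration (this fragment is not part of the guestbook files, and the nodePort value is an arbitrary example from the default 30000-32767 range), a NodePort service spec would look like this:
  spec:
    type: NodePort
    ports:
    - port: 80          # port of the automatically created ClusterIP service
      targetPort: 80    # port on the backing Pods
      nodePort: 30080   # static port opened on every node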
 
 
LoadBalancer
 
The final type, which we will use in our example, is the LoadBalancer type. This service type exposes the service externally using your cloud provider's load balancer. The external load balancer routes to the NodePort and ClusterIP services, which are created automatically. In other words, it creates an Azure load balancer with a public IP that we can use to connect to the application, as shown in the screenshot below. But first, let me explain the Azure Load Balancer.
 
 

What is the Azure Load Balancer?

 
A load balancer is used to distribute incoming traffic across a pool of virtual machines, and it stops routing traffic to any virtual machine in the pool that has failed. In this way, we can make our application resilient to software or hardware failures in that pool. Azure Load Balancer operates at layer four of the Open Systems Interconnection (OSI) model. It is the single point of contact for clients and distributes inbound flows that arrive at its front end to the backend pool instances.
 
 
Load Balancing
 
Azure Load Balancer uses a five-tuple hash composed of source IP, source port, destination IP, destination port, and protocol. We can configure a load-balancing rule to route traffic based on, among other things, the source IP address and source port from which the traffic originates.
 
Port forwarding
 
The load balancer also provides port forwarding. If we have a pool of web servers and don't want to associate a public IP address with each server in the pool, we can still reach an individual server for maintenance activities, for example over RDP, through a port-forwarding rule on the load balancer's public IP.
 
Application agnostic and transparent
 
The load balancer doesn't directly interact with the TCP or UDP payload or with the application layer. If we need to route traffic based on URLs or host multiple sites behind one endpoint, we can use Azure Application Gateway instead.
 
Automatic reconfiguration
 
The load balancer reconfigures itself when we scale instances up or down. If we add more virtual machines to the backend pool, the load balancer is reconfigured automatically.
 
Health probes
 
As we discussed earlier, the load balancer can recognize failed virtual machines in the backend pool and stop routing traffic to them. It recognizes failures by using health probes; we can configure a health probe to determine the health of the instances in the backend pool.
 
Outbound connection
 
All outbound flows from a private IP address inside our virtual network to public IP addresses on the internet can be translated to a frontend IP of the load balancer.
 
Now let's expose the front-end service. The following code will help us understand how a front-end service is exposed:
  apiVersion: v1
  kind: Service
  metadata:
    name: frontend
    labels:
      app: guestbook
      tier: frontend
  spec:
    # uncomment the following line (and comment out type: LoadBalancer below)
    # if you want to use a NodePort instead
    #type: NodePort
    # if your cluster supports it, the following automatically creates
    # an external load-balanced IP for the frontend service
    type: LoadBalancer
    ports:
    - port: 80
    selector:
      app: guestbook
      tier: frontend
Now that you have seen how a front-end service is exposed, let's make the guestbook application ready for use with the following steps.
 
Step 1
 
To create the service, run the following command:
  kubectl create -f frontend-service.yaml
This step takes some time to execute when you run it for the first time. In the background, Azure must perform a couple of actions to make it seamless: it has to create an Azure load balancer and a public IP, and set up port-forwarding rules to forward traffic on port 80 to internal ports of the cluster.
 
 
 
Step 2
 
Run the following command until there is a value in the EXTERNAL-IP column:
  kubectl get service
This should display the output shown in the screenshot below.
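Alternatively, instead of re-running the command, you can let kubectl watch the service and stream updates until the external IP appears (press Ctrl+C to stop):
  kubectl get service frontend -w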
 
 
Step 3
 
In the Azure portal, if you click on All Resources and filter on Load balancer, you will see a Kubernetes load balancer. Clicking on it shows you something similar to the screenshot below. The highlighted sections show that there is a load-balancing rule accepting traffic on port 80 and that you have two public IP addresses:
 
If you click through on the two public IP addresses, you'll see both IP addresses linked to your cluster. One of those will be the IP address of your actual service; the other one is used by AKS to make outbound connections.
 
 
We're finally ready to put our guestbook app into action!

Let's Play with the Application

 
Type the public IP of the service into your favorite browser. You should get the output shown in the screenshot below.
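If you prefer the command line, you can run the same check with curl; replace <EXTERNAL-IP> with the address you obtained in Step 2 (the placeholder is ours, not part of the cluster output):
  curl http://<EXTERNAL-IP>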
 
Go ahead and record your messages. They will be saved. Open another browser and type the same IP; you will see all the messages you typed.
 
Congratulations – you have completed your first fully deployed, multi-tier, cloud-native Kubernetes application.
 
To conserve resources on your free-trial virtual machines, it is better to delete the deployments you created before running the next round of deployments, using the following commands:
  kubectl delete deployment frontend redis-master redis-slave
  kubectl delete service frontend redis-master redis-slave

Conclusion

 
Over the three parts of Application Deployment on Azure Kubernetes Service, you have deployed a Redis cluster and a publicly accessible web application. You have learned how Deployments, ReplicaSets, and Pods are linked, and you have seen how Kubernetes uses the service object to route network traffic.