Application Deployment On Azure Kubernetes Service

 
This tutorial shows you how to build and deploy a simple, multi-tier web application using Azure Kubernetes Service and Docker with Redis. It is a three-part series, so let's begin with part one.
 
What we will cover:
  • Introduction of the application
  • Redis Master and Slave Architecture
  • Deployment of the Redis Master
  • Exploring the Redis Master Deployment
Prerequisites
  • Azure Subscription Account
  • AKS Cluster
  • Basic Knowledge of Kubernetes Concepts (if not, please read this article first)
  • Basic Knowledge of YAML

Introduction of the application

 
The application that we are going to deploy records the comments, opinions, and suggestions of the people who visit your hotel or restaurant; hence, we named it Guestbook. The sample guestbook application is a simple, multi-tier web application.
 
The different tiers in this application will have multiple instances. This is useful both for high availability and for scale. The front end will be deployed using multiple replicas.
 
The guestbook's front end is a stateless application because the front end doesn't store any state. The Redis cluster in the back end is stateful as it stores all the guestbook entries. The application uses Redis for its data storage. Redis is an in-memory key-value database. Redis is most often used as a cache.
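For a concrete feel of what key-value storage means, here is a minimal sketch using the standard redis-cli client; the key name guestbook:entry:1 is purely illustrative and not something the guestbook application itself uses,
 
# Store and read back a value by key (key name is illustrative)
redis-cli SET guestbook:entry:1 "Great stay, friendly staff!"
redis-cli GET guestbook:entry:1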
 
We will begin deploying this application by deploying the Redis master. But first, here is an overview of the Redis master and slave architecture.
 

Redis Master and Slave Architecture

 
Redis Cluster is a distributed implementation of Redis. It provides a consistent and resilient data service in which data is automatically sharded (partitioned) across multiple Redis nodes, so your dataset is automatically split among them. It also provides a master/slave setup to improve availability in case of a failure. Redis is therefore based on a master-slave architecture.
 
A Redis server can run in one of two modes,
  • Master Mode (Redis Master)
  • Slave Mode (Redis Slave or Redis Replica)
We can configure which mode to write to and read from. The recommendation is to serve writes through the Redis master and reads through the Redis slaves. The master replicates writes to one or more slaves, and this master-slave replication is done asynchronously.
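As a rough sketch of how this pairing is configured in plain Redis (outside Kubernetes), a replica is simply pointed at its master. The hostname redis-master and port 6379 below are illustrative assumptions; REPLICAOF is the current name of the command (older Redis versions use SLAVEOF),
 
# Run against the replica: start replicating from the master (hostname is illustrative)
redis-cli REPLICAOF redis-master 6379

# Inspect the current role and the connected replicas
redis-cli INFO replication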
 
 

Deployment of Redis Master

 
Now that you understand what the Redis master and Redis slaves are and how they work, let's deploy the Redis master. Along the way, you will learn about the YAML syntax that is required for this deployment.
 
Perform the following steps to complete the task,
 
Open the Cloud Shell from the Azure portal,
 

Clone the GitHub repository using the following command; I have placed all the files there,
 
git clone https://github.com/RumeelHussain/Azure-K8s
cd Deployment
 
Enter the following command to deploy the master,
 
kubectl apply -f redis-master-deployment.yaml
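If you would like to follow the rollout, the standard kubectl commands below report its progress; this step is optional,
 
# Wait until the Deployment has finished rolling out
kubectl rollout status deployment/redis-master

# Show the Deployment and how many replicas are ready
kubectl get deployment redis-master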
 
 
It will take some time for the application to download and start running. While you wait, let's understand the command you just typed and executed. Let's start by exploring the content of the YAML file that was used,
  1. apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2  
  2. kind: Deployment  
  3. metadata:  
  4.   name: redis-master  
  5.   labels:  
  6.     app: redis  
  7. spec:  
  8.   selector:  
  9.     matchLabels:  
  10.       app: redis  
  11.       role: master  
  12.       tier: backend  
  13.   replicas: 1  
  14.   template:  
  15.     metadata:  
  16.       labels:  
  17.         app: redis  
  18.         role: master  
  19.         tier: backend  
  20.     spec:  
  21.       containers:  
  22.       - name: master  
  23.         image: k8s.gcr.io/redis:e2e  # or just image: redis  
  24.         resources:  
  25.           requests:  
  26.             cpu: 100m  
  27.             memory: 100Mi  
  28.         ports:  
  29.         - containerPort: 6379   
Let's dive deeper into the code to understand the provided parameters,
 
Line 2
 
This states that we are creating a deployment. A deployment is a wrapper around Pods that makes it easy to update and scale Pods.
kind: Deployment
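If you are unsure which fields a Deployment (or any other object) accepts, kubectl can print the reference documentation directly from the API; for example,
 
# Show the documented fields of the Deployment object and its spec
kubectl explain deployment
kubectl explain deployment.spec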
Lines 4-6
 
Here, the Deployment is given a name (redis-master) and a label (app: redis).
name: redis-master
labels:
  app: redis
Lines 7-12
 
These lines specify the Pods that this Deployment will manage. In this example, the Deployment will select and manage all Pods whose labels match (app: redis, role: master, and tier: backend). These labels exactly match the labels provided in lines 14-19.
spec:
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
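You can use these same labels yourself; once the Deployment is running, a label selector like the one below should return only the Redis master Pod,
 
# List only the Pods carrying the labels the Deployment selects on
kubectl get pods -l app=redis,role=master,tier=backend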
Line 13
 
This tells Kubernetes that we need exactly one copy of the running Redis master. This is a key aspect of the declarative nature of Kubernetes. You provide a description of the containers your applications need to run (in this case, only one replica of the Redis master), and Kubernetes takes care of it.
replicas: 1
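An optional way to see this declarative behaviour in action, once the Deployment is up, is to delete the Pod and watch Kubernetes create a replacement to satisfy the desired replica count,
 
# Delete the running Redis master Pod...
kubectl delete pod -l app=redis,role=master

# ...then list the Pods again: the Deployment brings up a new one to satisfy replicas: 1
kubectl get pods -l app=redis,role=master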
Lines 14-19
 
The template defines the Pod that this Deployment creates and adds labels to it so that it can be grouped and connected to by other objects, such as Services. We will revisit these labels later to see how they are used.
template:
  metadata:
    labels:
      app: redis
      role: master
      tier: backend
Line 22
 
Gives this container a name, which is master. In the case of a multicontainer Pod, each container in a Pod requires a unique name.
- name: master
Line 23
 
This line indicates the Docker image that will be run. In this case, it is the Redis image tagged with e2e (the latest Redis image that successfully passed its end-to-end [e2e] tests).
image: k8s.gcr.io/redis:e2e  # or just image: redis
Lines 24-27
 
Sets the CPU/memory resources requested for the container. In this case, the request is 0.1 CPU, which is equal to 100m and is also often referred to as 100 millicores. The memory requested is 100Mi, or 104,857,600 bytes, which is equal to ~105 MB.
resources:
  requests:
    cpu: 100m
    memory: 100Mi
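Once the Pod is running, you can confirm that these requests were applied; kubectl describe lists the Requests section for each container,
 
# Shows the CPU/memory requests of the Redis master Pod
kubectl describe pod -l app=redis,role=master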
Lines 28-29
 
These two lines indicate that the container is going to listen on port 6379.
ports:
- containerPort: 6379
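If you want a quick end-to-end test against that port, a port-forward works; this sketch assumes you have redis-cli installed locally,
 
# Forward local port 6379 to the Pod's port 6379 (leave this running)
kubectl port-forward deployment/redis-master 6379:6379

# In a second terminal, ping Redis through the forwarded port
redis-cli -p 6379 ping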
Now you have deployed the Redis master and learned about the syntax of the YAML file that was used to create this deployment. In the next step you will examine the deployment and learn about the different elements that were created.
 
Explore the deployment
 
The redis-master Deployment is now complete. To explore the deployment, type the following command in Azure Cloud Shell.
kubectl get all
The output lists all the objects that this Deployment created.
 
 
You can see that we have a deployment named redis-master. It controls a ReplicaSet named redis-master-<random-id>. On further examination, you will also find that the ReplicaSet is controlling a Pod, redis-master-<replica-set-random-id>-<random-id>.
 
More details can be obtained by executing the kubectl describe <object> <instance name> command, as follows,
kubectl describe deployment/redis-master
This will generate detailed output describing the Deployment, including its labels, selector, replica status, and recent events.
 
 
You have now launched a Redis master with the default configuration. In practice, you would launch an application with an environment-specific configuration. So, before proceeding to part two, we need to clean up the current version, and we can do so by running the following command,
kubectl delete deployment/redis-master
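You can verify that everything was removed by listing the resources again; assuming nothing else is deployed in the namespace, only the default kubernetes service should remain,
 
# Confirm the Deployment, ReplicaSet, and Pod are gone
kubectl get all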