Getting Started With Kubernetes - Part Two

Introduction 

 
In the previous part, Getting Started with Kubernetes - Part 1, we learned what Kubernetes is. In this article, we will learn about the architecture of Kubernetes. The following topics are covered in this article:
  • Kubernetes Architecture
    • Master Node
      • API Server
      • ETCD Server
      • Kube Scheduler
      • Controller Manager
    • Node Components
      • Kubelet
    • Pods
    • Overlay Network

Kubernetes Architecture

 
Now that we have seen some of the benefits and the history of Kubernetes, let's understand its architecture.
 
 

Master Node

 
The Master Node is also called the control plane. It runs three main components, namely the API Server, the Scheduler, and the Controller Manager, and it stores the cluster state in the ETCD server.
 
API Server
 
The API Server is the communication hub of the cluster; every user and every component talks only to the Kube API Server. It takes each request and forwards it to the other services.
 
We can use the Kubectl CLI to manage the Kubernetes cluster.
 
Kubectl sends a request to the API Server, and the API Server sends the response back.
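 
As a small illustration, every Kubectl command below is simply a request to the Kube API Server behind the scenes (these are generic read-only commands, not tied to any particular cluster):

# List the nodes in the cluster (Kubectl calls the API Server, which reads from etcd)
kubectl get nodes

# List all pods in all namespaces
kubectl get pods --all-namespaces

# Show the API Server endpoint Kubectl is talking to
kubectl cluster-info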
 
 
ETCD Server
 
The Kube API Server stores all the cluster information in etcd, and the other services also read from and store information in the etcd storage.
 
If we have multiple Kubernetes masters, we can set up multiple ETCD servers clustered together, syncing all the data.
 
It should be backed up regularly.
 
The current state of everything in the cluster is stored here in the ETCD server.
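 
Since etcd holds the entire cluster state, a periodic snapshot is the usual way to back it up. Below is a minimal sketch using etcdctl; the endpoint and certificate paths are assumptions that vary by installation (the paths shown are typical for kubeadm clusters):

# Take a snapshot of etcd (endpoint and certificate paths are illustrative)
ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Check the snapshot that was written
ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db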
 
 
Kube Scheduler
 
It picks up a newly created pod (container) and places it on the right node based on different factors, such as available resources and any constraints we define.
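 
We can observe the scheduler's decisions with Kubectl; the -o wide output includes a NODE column showing where each pod was placed:

# The NODE column shows the node the scheduler assigned each pod to
kubectl get pods -o wide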
 
 

Controller Manager

 
There are different types of controllers, and they all run as part of the Controller Manager.
 
 
The node controller, for example, is responsible for checking the status of the nodes.
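 
The node status that the node controller keeps an eye on is easy to inspect from Kubectl (the node name in the second command is just a placeholder):

# STATUS shows Ready or NotReady for every node
kubectl get nodes

# Conditions such as Ready, MemoryPressure, and DiskPressure for a single node
kubectl describe node <node-name>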
 
Node Components
 
There are 3 node components: the Kubelet, the Kube Proxy, and the container runtime.
 
 
The Kubelet is the agent that listens for requests from the master and does all the heavy lifting on the node.
 
For example, if it gets a request to launch a certain number of pods, the Kubelet will fetch the images, run the containers from those images, and so on.
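 
We can see the Kubelet's work reflected in a pod's events; the pod name below is a placeholder, and the exact event messages vary by cluster:

# The Events section shows the Kubelet pulling the image and creating/starting
# the container (events such as "Pulling", "Pulled", "Created", "Started")
kubectl describe pod <pod-name>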
 
There are different add-ons that can be installed into Kubernetes, such as monitoring of container resources or cluster-level logging, or we can use third-party tools like Splunk.
 
 
Now let's walk through the entire flow from start to finish.
 
  • Kubectl sends the request to the API Server.
  • The API Server stores the information in the etcd storage.
  • The scheduler picks up that information; if it is a request such as "create a pod/container", it finds the right worker node based on its algorithms and sends the information to the Kubelet on that node.
  • The Kubelet receives the information and does things like pulling images, running containers, assigning ports, and so on.
  • If we say we need 4 pods, or replication of this container, that request goes to the Controller Manager, which monitors and manages it and makes sure that 4 pods are created on the worker nodes (a minimal manifest sketch for this follows below).
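 
As a minimal sketch of that last point, the usual way to ask for 4 replicas is a Deployment manifest like the one below; the name and image are placeholders for illustration only:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-web              # placeholder name
spec:
  replicas: 4                 # the Controller Manager keeps 4 pods running
  selector:
    matchLabels:
      app: demo-web
  template:
    metadata:
      labels:
        app: demo-web
    spec:
      containers:
      - name: web
        image: nginx:1.25     # example image
        ports:
        - containerPort: 80

Applying it with kubectl apply -f deployment.yaml sends the request to the API Server, and the flow described above takes over from there.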
 
Pods
 
So far, we have been running containers directly. In Kubernetes we don't run containers directly; we always create pods and manage pods.
 
A pod is the smallest entity in the Kubernetes cluster, and it can contain one or many containers.
 
We can have a pod running a web service or a database service, or a pod can be running multiple containers.
 
 
We should run services such as the web service and the database service in separate pods, not in the same pod. However, we can run helper services, like a logging service, in the same pod. Every pod will have an IP address.
 
The container runs the actual service, and if it is a web service, it will listen on a port. That port belongs to the container, while the IP address belongs to the pod. If we run multiple containers in a pod, they will all share the same IP address.
 
If we attach a volume to the pod, it can be accessed by all the containers running inside that pod.
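 
Here is a minimal pod sketch to make this concrete; the names and images are assumptions for illustration only. The port is declared on the container, the IP address will belong to the pod, and the one volume is mounted by both containers:

apiVersion: v1
kind: Pod
metadata:
  name: web-pod               # placeholder name
spec:
  volumes:
  - name: shared-data         # one volume, visible to every container in the pod
    emptyDir: {}
  containers:
  - name: web                 # runs the actual service
    image: nginx:1.25         # example image
    ports:
    - containerPort: 80       # the port belongs to this container
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: helper              # helper container, shares the pod's IP and the volume
    image: busybox:1.36       # example image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data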
 
 
Sometimes we run multiple containers inside a single pod, as in the example described below.
 
In such a pod, we can have an init container (it could be doing something like cloning a Git repo), then the main container that actually runs the service (the web server), and a sidecar container that can be a logging or monitoring service.
 
Sometimes people mistakenly put multiple main processes (containers) in a single pod, which is doable but not recommended.
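 
A hedged sketch of that init/main/sidecar pattern is below; the image names, repository URL, and commands are placeholders, not a recommended production setup:

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # placeholder name
spec:
  initContainers:
  - name: clone-repo            # init container: finishes before the others start
    image: alpine/git:latest    # example image
    command: ["git", "clone", "https://github.com/example/app.git", "/work"]
    volumeMounts:
    - name: workdir
      mountPath: /work
  containers:
  - name: web                   # main container: the actual service
    image: nginx:1.25           # example image
    ports:
    - containerPort: 80
    volumeMounts:
    - name: workdir
      mountPath: /usr/share/nginx/html
  - name: log-sidecar           # sidecar container: logging/monitoring helper
    image: busybox:1.36         # example image
    command: ["sh", "-c", "while true; do echo heartbeat; sleep 60; done"]
  volumes:
  - name: workdir
    emptyDir: {}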
 

Overlay Network

 
Now, before we actually launch or set up our Kubernetes cluster, let's understand the overlay network.
 
Here we have multiple Docker engines and worker nodes working together, with multiple pods distributed across the cluster.
 
Imagine we have a 3-node Kubernetes cluster; let's see how the nodes interact. When we set up Kubernetes, this communication, the overlay network, is enabled by default.
 
 
Every pod can communicate with every other pod in the Kubernetes cluster.
 
Each pod gets an IP address from the overlay network, and this all happens automatically.
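 
A quick way to see this is to list the pod IPs and then reach one pod from another; the pod name, container name, and IP below are placeholders borrowed from the earlier sketches:

# The IP column shows each pod's address from the overlay network
kubectl get pods -o wide

# Reach another pod directly by its IP from inside the busybox helper container
# (pod name, container name, and IP are placeholders)
kubectl exec web-pod -c helper -- wget -qO- http://10.244.1.15:80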
 

Summary

 
In this article, we understood the architecture of Kubernetes in detail. In the next article, we will see how to set up a Kubernetes cluster.
 
I hope you find this article helpful. Stay tuned for more … Cheers!!

