Kubernetes  

Vanilla Kubernetes

Prerequisites for understanding this

  • Containers: Lightweight units to package applications and dependencies

  • Docker basics: Common tool to build and run containers

  • Microservices architecture: Splitting apps into smaller independent services

  • YAML syntax: Configuration language used to define Kubernetes resources

  • Cloud/VM concepts: Understanding nodes and distributed systems

Introduction

Kubernetes (often called K8s) is an open-source container orchestration system used to automate the deployment, scaling, and management of containerized applications. “Vanilla Kubernetes” refers to the pure, upstream version without vendor-specific enhancements (unlike managed distributions such as EKS, AKS, or GKE from the major cloud providers). It provides a standardized way to run distributed systems reliably across clusters of machines. Kubernetes abstracts infrastructure complexity and ensures that applications run consistently regardless of environment.

What problems can we solve with this?

Modern applications are increasingly built from microservices and containers, but managing them manually becomes extremely complex as scale increases. Kubernetes solves challenges around deployment, scaling, fault tolerance, and service discovery. It keeps applications in their desired state, automatically recovering from failures and distributing workloads efficiently across nodes. It also simplifies rolling updates and rollbacks without downtime. In distributed systems, networking and communication between services can become difficult; Kubernetes provides built-in mechanisms for both. Overall, it reduces operational overhead and improves the reliability and scalability of applications.

Key problems solved:

  • Container orchestration: Automates deployment and lifecycle management

  • Auto-scaling: Adjusts resources based on load

  • Self-healing: Restarts failed containers automatically

  • Service discovery: Enables communication between services

  • Load balancing: Distributes traffic efficiently

  • Rolling updates: Deploys updates without downtime

  • Infrastructure abstraction: Hides underlying hardware complexity
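As a concrete illustration of the auto-scaling point above, scaling policy can itself be expressed declaratively. The following is a hedged sketch of a HorizontalPodAutoscaler manifest; it assumes a Deployment named web-app already exists and that the cluster has a metrics server installed:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa          # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app            # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when avg CPU exceeds 70%
```

Kubernetes then adjusts the replica count between 2 and 10 automatically as load changes, with no manual intervention.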

How to implement/use this?

To use vanilla Kubernetes, you typically set up a cluster consisting of a control plane (historically called the master) and multiple worker nodes. The control plane manages the cluster, while the worker nodes run containerized workloads. You define applications using YAML manifests describing Pods, Deployments, Services, and other resources. Once applied, Kubernetes continuously reconciles the actual state with the desired state. You interact with the cluster using tools like kubectl. Networking, storage, and scaling are handled automatically via built-in controllers. In production, you also configure monitoring, logging, and security policies. The system operates declaratively: you describe what you want, and Kubernetes figures out how to achieve it.
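As a minimal sketch of such a YAML manifest, the Deployment below asks Kubernetes to keep three replicas of a container running (the names and image are illustrative, not prescriptive):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app              # illustrative name
spec:
  replicas: 3                # desired state: three identical Pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app         # must match the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25  # any container image
          ports:
            - containerPort: 80
```

Applying this with kubectl apply -f deployment.yaml records the desired state; from then on, Kubernetes restarts or reschedules Pods as needed to keep three replicas running.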

Steps to implement:

  • Install cluster: Use kubeadm or tools like Minikube/kind

  • Configure kubectl: CLI to interact with cluster

  • Create YAML manifests: Define Pods, Deployments, Services

  • Apply configurations: Use kubectl apply

  • Monitor workloads: Check status using kubectl commands

  • Expose services: Use Service or Ingress

  • Scale applications: Update replicas dynamically
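The "expose services" step above can be sketched with a Service manifest. This assumes Pods labeled app: web-app exist (for example, from a Deployment of that name):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc          # illustrative name
spec:
  type: ClusterIP            # internal access; NodePort or LoadBalancer for external exposure
  selector:
    app: web-app             # routes traffic to Pods carrying this label
  ports:
    - port: 80               # port the Service listens on
      targetPort: 80         # port on the Pods
```

Scaling can then be done declaratively (edit the Deployment's replicas field and re-apply) or imperatively with kubectl scale deployment web-app --replicas=5.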

Sequence Diagram

This sequence shows how a deployment request flows through Kubernetes. The user submits a YAML configuration using kubectl, which communicates with the API Server—the central control point. The API Server persists the desired state in etcd and notifies the Controller Manager, which ensures that the required number of Pods are running. The Scheduler selects an appropriate node for each Pod based on resource availability. Once assigned, the Kubelet on that node instructs the container runtime (such as containerd or CRI-O) to start the container. Finally, the node continuously reports status back to the API Server. This loop keeps the system consistent with the desired state.

[Sequence diagram: User/kubectl → API Server → etcd → Controller Manager → Scheduler → Kubelet → container runtime, with status reported back to the API Server]

Steps explained:

  • User submits config: Defines desired application state

  • API Server receives request: Central entry point

  • Controller Manager acts: Maintains desired state

  • Scheduler assigns node: Chooses optimal node

  • Kubelet executes: Runs workload on node

  • Container runtime runs app: Starts containers

  • Status feedback loop: Ensures system health

Component Diagram

This component diagram shows the architecture of a Kubernetes cluster. The Control Plane manages the overall system, with the API Server acting as the gateway, etcd storing cluster state, Scheduler assigning workloads, and Controller Manager enforcing desired states. Worker nodes execute workloads, where Kubelet ensures containers are running and Kube Proxy manages networking. The flow begins with a user request and moves through scheduling, execution, and exposure of services. Each component plays a specific role in maintaining cluster stability and ensuring applications run as intended. This modular architecture allows Kubernetes to scale and remain fault-tolerant.

[Component diagram: Control Plane (API Server, etcd, Scheduler, Controller Manager) connected to Worker Nodes (Kubelet, Kube Proxy, container runtime), with user requests entering through the API Server]

Steps explained:

  1. User request: Initiates deployment

  2. State storage: Persisted in etcd

  3. Scheduling trigger: Need to place pod

  4. Node assignment: Scheduler decision

  5. Pod delivery: Sent to node

  6. Container execution: Runtime starts app

  7. Status reporting: Health feedback

  8. Service exposure: External/internal access

Advantages:

  1. Scalability: Easily scale applications horizontally

  2. High availability: Ensures minimal downtime

  3. Portability: Runs on any infrastructure

  4. Self-healing: Automatically replaces failed components

  5. Declarative management: Define desired state in YAML

  6. Efficient resource usage: Optimizes node utilization

  7. Extensibility: Supports plugins and custom resources
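To illustrate the extensibility point, the API can be extended with custom resource types via a CustomResourceDefinition. The sketch below defines a hypothetical Backup resource (the group, names, and schema fields are invented for illustration):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com  # must be <plural>.<group>
spec:
  group: example.com         # hypothetical API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron expression
```

Once applied, kubectl can create and list Backup objects like any built-in resource; a custom controller would then act on them to implement the actual behavior.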

Summary

Vanilla Kubernetes provides a powerful, flexible platform for managing containerized applications at scale. By abstracting infrastructure complexities and offering automation for deployment, scaling, and recovery, it simplifies modern application operations. Its declarative approach ensures consistency, while its modular architecture supports extensibility and resilience. Although it has a learning curve, mastering Kubernetes enables efficient management of distributed systems and is a critical skill in cloud-native development.