Kubernetes  

How to Deploy a Containerized Node.js or Python App to Kubernetes Step by Step?

Deploying a containerized Node.js or Python application to Kubernetes is a core DevOps practice for building scalable, resilient, and production-ready cloud-native systems. Kubernetes orchestrates containers, manages scaling, ensures high availability, and automates deployment rollouts.

This step-by-step guide explains how to containerize your application, create Kubernetes manifests, deploy to a cluster, expose services, and implement production best practices.

Prerequisites

Before starting, ensure you have:

  • Docker installed

  • kubectl CLI configured

  • Access to a Kubernetes cluster (Minikube, Kind, AKS, EKS, GKE, or on-prem)

  • A working Node.js or Python application

Step 1: Containerize the Application

Example Node.js App Structure

app.js
package.json

Create a Dockerfile:

# Node.js runtime on a small Alpine base image
FROM node:18-alpine

WORKDIR /app

# Copy manifests and install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install

COPY . .

EXPOSE 3000

CMD ["node", "app.js"]

Example Python (FastAPI) Dockerfile

# Slim Python base image
FROM python:3.11-slim

WORKDIR /app

# Copy requirements and install dependencies first so this layer is cached between builds
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

Build the Docker image:

docker build -t myapp:1.0 .

Test locally (map port 8000 instead for the FastAPI image):

docker run -p 3000:3000 myapp:1.0

Step 2: Push Image to Container Registry

Tag image:

docker tag myapp:1.0 your-dockerhub-username/myapp:1.0

Push image:

docker push your-dockerhub-username/myapp:1.0

You can use Docker Hub, Azure Container Registry, Amazon ECR, or Google Artifact Registry.

Step 3: Create Kubernetes Deployment Manifest

Create deployment.yaml (this example uses the Node.js port 3000; substitute 8000 for the FastAPI image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: your-dockerhub-username/myapp:1.0
        ports:
        - containerPort: 3000
        resources:
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"

Apply deployment:

kubectl apply -f deployment.yaml

Verify:

kubectl get pods

Step 4: Expose the Application Using a Service

Create service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
  - port: 80
    targetPort: 3000

Apply service:

kubectl apply -f service.yaml

For external access, change type to LoadBalancer or use NodePort.
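As a sketch, a LoadBalancer variant of the same Service might look like this (the external IP it receives depends on your cloud provider):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp-service-external
spec:
  type: LoadBalancer   # the cloud provider provisions an external load balancer
  selector:
    app: myapp
  ports:
  - port: 80           # port exposed by the load balancer
    targetPort: 3000   # container port the traffic is forwarded to
```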

Step 5: Configure Ingress for Production

Ingress provides HTTP routing and TLS termination.

Example ingress.yaml:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-service
            port:
              number: 80

Apply ingress:

kubectl apply -f ingress.yaml

Ensure an Ingress controller (NGINX or cloud-managed) is installed.
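If no controller is present, one common option is installing the NGINX Ingress Controller with Helm (this assumes Helm 3 and uses the project's published chart repository):

```shell
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace
```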

Step 6: Configure Environment Variables and Secrets

Create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config
data:
  NODE_ENV: production

Create a Secret (values under data must be base64-encoded):

apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=
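Note that base64 is encoding, not encryption; the value above is simply the word password encoded. You can generate such values in a shell:

```shell
# Base64-encode a secret value (printf avoids the trailing newline that echo adds)
printf 'password' | base64
# prints cGFzc3dvcmQ=
```

Alternatively, kubectl create secret generic myapp-secret --from-literal=DB_PASSWORD=password performs the encoding for you.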

Reference them inside the Deployment's container spec:

envFrom:
- configMapRef:
    name: myapp-config
- secretRef:
    name: myapp-secret

Step 7: Enable Auto-Scaling

Create a Horizontal Pod Autoscaler (HPA); note that it requires the metrics-server add-on to collect CPU metrics:

kubectl autoscale deployment myapp-deployment --cpu-percent=70 --min=2 --max=5

Verify HPA:

kubectl get hpa
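The imperative command above can also be expressed declaratively; a sketch using the autoscaling/v2 API (the HPA name is illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:          # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above 70% average CPU
```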

Step 8: Rolling Updates and Zero Downtime Deployment

Update image version in deployment.yaml:

image: your-dockerhub-username/myapp:1.1

Apply update:

kubectl apply -f deployment.yaml

Kubernetes performs a rolling update automatically, replacing old pods incrementally so the application stays available throughout.

Check rollout status:

kubectl rollout status deployment/myapp-deployment

If the new version misbehaves, roll back to the previous revision:

kubectl rollout undo deployment/myapp-deployment

Step 9: Logging and Monitoring

Production Kubernetes deployments require observability:

  • Centralized logging (ELK stack)

  • Metrics (Prometheus + Grafana)

  • Tracing (OpenTelemetry)

  • Health checks (readiness and liveness probes)

Example liveness probe (added under the container spec in the Deployment):

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
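The list above also mentions readiness probes; a companion readinessProbe keeps traffic away from a pod until it reports healthy (the /health path is an assumption, use whatever endpoint your app exposes):

```yaml
readinessProbe:
  httpGet:
    path: /health   # assumed health endpoint
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10
```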

Step 10: CI/CD Integration

Automate deployment using:

  • GitHub Actions

  • GitLab CI

  • Azure DevOps

  • ArgoCD

  • Flux

Typical CI/CD flow:

Code Commit → Build Docker Image → Push to Registry → Deploy to Kubernetes → Verify Health
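As one concrete sketch of that flow, a minimal GitHub Actions workflow might look like this (registry name, secrets, and cluster credentials are placeholders you would supply):

```yaml
name: deploy
on:
  push:
    branches: [main]
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push image
        run: |
          docker login -u "${{ secrets.REGISTRY_USER }}" -p "${{ secrets.REGISTRY_TOKEN }}"
          docker build -t your-dockerhub-username/myapp:${{ github.sha }} .
          docker push your-dockerhub-username/myapp:${{ github.sha }}
      - name: Deploy to Kubernetes
        run: |
          kubectl set image deployment/myapp-deployment \
            myapp=your-dockerhub-username/myapp:${{ github.sha }}
          kubectl rollout status deployment/myapp-deployment
```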

Difference Between Deployment and StatefulSet

Feature       | Deployment     | StatefulSet
------------- | -------------- | ------------------
Use Case      | Stateless apps | Stateful apps
Pod Identity  | Dynamic        | Stable
Storage       | Ephemeral      | Persistent volumes
Scaling       | Simple         | Ordered scaling
Common Usage  | APIs, web apps | Databases

Node.js and Python web APIs are typically deployed using Deployments.
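For contrast, a minimal StatefulSet sketch (the database image, headless Service name, and volume size are illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mydb
spec:
  serviceName: mydb-headless   # headless Service giving pods stable DNS names
  replicas: 3
  selector:
    matchLabels:
      app: mydb
  template:
    metadata:
      labels:
        app: mydb
    spec:
      containers:
      - name: mydb
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each pod gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```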

Common Production Mistakes

  • Hardcoding secrets inside Docker image

  • Not setting resource limits

  • Missing health probes

  • Ignoring auto-scaling

  • No monitoring setup

Avoiding these improves reliability and cost efficiency.

Summary

Deploying a containerized Node.js or Python application to Kubernetes involves building a Docker image, pushing it to a container registry, creating Deployment and Service manifests, configuring environment variables and secrets, enabling auto-scaling, and implementing rolling updates and monitoring. By following cloud-native best practices such as resource management, health checks, Ingress configuration, and CI/CD automation, developers can ensure their applications are scalable, secure, and production-ready in modern Kubernetes environments.