
From Localhost to Live: Deploying n8n on Kubernetes

What started as a simple kubectl apply turned into a multi-day saga of debugging, YAML rewrites, and relentless persistence.
But by the end of it, I had something far more valuable than a running workflow automation tool — I had a deeper understanding of Kubernetes internals, storage behavior, and how patience is the hidden prerequisite to DevOps.


This is my real experience deploying n8n — an open-source workflow automation tool — on Kubernetes (v1.32), running across Proxmox-based Ubuntu 24.04 VMs.

If you’ve ever felt frustrated setting up persistent volumes, PVC bindings, or NodePorts, trust me — you’ll relate.

Why I Wanted n8n on Kubernetes

I’ve always loved how n8n lets you automate anything: email workflows, Slack alerts, API integrations, you name it. But running it locally felt limiting.

I wanted high availability, data persistence, and the ability to manage multiple automations like microservices.

That’s when Kubernetes came to mind.

Instead of relying on Docker Compose, I could separate the n8n container, PostgreSQL database, and storage layers, and let Kubernetes handle the rest — scalability, restarts, and networking.

At least, that was the plan.

Step 1: Setting Up the Namespace and Storage

I started with a dedicated namespace:

  
    kubectl create namespace n8n
  

Next came storage.

I created two Persistent Volumes (PVs) and two Persistent Volume Claims (PVCs): one pair for n8n, another for PostgreSQL.

Here’s the first version of my n8n1.yaml:

[Screenshot: the first version of n8n1.yaml, with 5Gi storage requests]
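The original only survives in the screenshot, but it boiled down to a hostPath Persistent Volume plus a matching 5Gi claim for each component. A minimal sketch of the n8n pair (the names and path here are illustrative, not the exact originals):

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: n8n-pv
    spec:
      capacity:
        storage: 5Gi              # the request that later caused trouble
      accessModes:
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      hostPath:
        path: "/mnt/data/n8n"     # directory on the node
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: n8n-pvc
      namespace: n8n
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
      volumeName: n8n-pv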

I applied it with confidence:

  
    kubectl apply -f n8n1.yaml -n n8n
  

And then… the PVC stayed Pending forever.

[Screenshot: kubectl get pvc showing the claim stuck in Pending]

The Storage Saga: When 5Gi Broke My Cluster

At first, I assumed it was a syntax issue. Then I suspected my host path permissions.

Hours later, I realized the truth: my Proxmox VM setup didn’t have enough storage provisioned to satisfy the 5Gi request.

Kubernetes doesn’t resize reality. If your node doesn’t have available space matching the claim, it quietly leaves the PVC pending.
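If you hit the same wall, the claim itself will tell you why. Listing the volumes and describing the claim surfaces the binding failure in the Events section (postgres-pvc here matches the manifest shown later in this post; substitute your own claim name):

    # What volumes exist, and what is each claim bound to?
    kubectl get pv
    kubectl get pvc -n n8n

    # The Events section at the bottom explains why the claim cannot bind
    kubectl describe pvc postgres-pvc -n n8n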

When you define persistent storage for a database like PostgreSQL in Kubernetes, you use two key objects: a Persistent Volume (PV), which represents the raw storage space, and a Persistent Volume Claim (PVC), which is the request for that space.

Most online examples default to a generous size, maybe 5Gi or 10Gi. While this seems logical ("more space is better!"), it often leads to silent deployment failures, especially in home-lab or otherwise restricted Kubernetes environments (like those running in VMs on a host like Proxmox).

The 5Gi Problem

In a typical bare-metal or single-node K8s setup, your cluster might be configured with a basic storage provisioner or a custom-defined set of available Persistent Volumes. This setup often imposes invisible limits:

  • Cluster Defaults: With static provisioning, a claim can only bind to a volume at least as large as it requests. If your pre-created local volumes top out at 2Gi, or the node simply doesn't have 5Gi to spare, a 5Gi claim has nothing it can bind to.

  • The Waiting Game: When your PVC requests 5Gi, and the cluster only has 2Gi volumes available, the PVC remains stuck in the Pending state. Your deployment never starts, and you scratch your head, wondering what went wrong.

The 2Gi Solution

For a fresh, vanilla n8n installation, the PostgreSQL database file is tiny—often less than 50MB.

By reducing the request in the PVC to 2Gi, you achieve a few things:

  1. Faster Binding: You significantly increase the chances of the PVC immediately binding to an available PV, respecting any unwritten or default limits your cluster may have.

  2. Sufficient Space: 2Gi is still ample space for a PostgreSQL database that stores n8n workflows and several hundred execution logs. You can always expand this later (though resizing PVs is another complex topic!).

The Takeaway: When starting out, be conservative with storage requests. Prioritize binding the PVC at a stable, lower amount (2Gi) over requesting a potentially unavailable large amount (5Gi).

I changed the YAML to 2Gi, reapplied, and it worked instantly.

  
    resources:
      requests:
        storage: 2Gi
  

Lesson learned: always align your PV capacity with actual node disk availability, not just what you wish you had.

It’s a small fix that saves hours of confusion.
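To confirm the fix before moving on, the claim should now report Bound (the output below is illustrative; the names match the combined manifest in Step 2):

    kubectl get pvc -n n8n

    NAME           STATUS   VOLUME        CAPACITY   ACCESS MODES   AGE
    postgres-pvc   Bound    postgres-pv   2Gi        RWO            15s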

Step 2: Deploying n8n and PostgreSQL Using a Single YAML File

Now that our Kubernetes cluster is up and running, it’s time for the real action — deploying n8n and its PostgreSQL database in one smooth go.

Rather than juggling multiple YAML files for Persistent Volumes, Services, and Deployments, let’s combine everything into a single manifest. This approach not only saves time but also ensures that your n8n and database components are perfectly aligned — the way they should be in production.

  
    # 1. Namespace for all resources
apiVersion: v1
kind: Namespace
metadata:
  name: n8n

---
# 2. Persistent Volume (PV) - HostPath (adjusted to 2Gi after the storage saga above)
#    NOTE: You MUST manually create the directory /mnt/data/postgres on your node
#    and set permissions (sudo chown -R 999:999 /mnt/data/postgres).
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 2Gi # Adjusted size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/postgres"

---
# 3. Persistent Volume Claim (PVC) - Links the PV to the Deployment
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: n8n
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi # Adjusted size
  volumeName: postgres-pv

---
# 4. PostgreSQL Database Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: n8n
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15
          env:
            - name: POSTGRES_DB
              value: n8n
            - name: POSTGRES_USER
              value: n8n
            - name: POSTGRES_PASSWORD
              value: Password@123 # CHANGE ME to a secure password
          ports:
            - containerPort: 5432
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres-storage
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc

---
# 5. PostgreSQL Service (ClusterIP for internal n8n connection)
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: n8n
spec:
  type: ClusterIP
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432

---
# 6. n8n Application Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: n8n
  namespace: n8n
  labels:
    app: n8n
spec:
  replicas: 1
  selector:
    matchLabels:
      app: n8n
  template:
    metadata:
      labels:
        app: n8n
    spec:
      containers:
        - name: n8n
          image: n8nio/n8n:latest
          ports:
            - containerPort: 5678
          env:
            # Database Configuration
            - name: DB_TYPE
              value: postgres
            - name: DB_POSTGRESDB_HOST
              value: postgres
            - name: DB_POSTGRESDB_PORT
              value: "5432"
            - name: DB_POSTGRESDB_DATABASE
              value: n8n
            - name: DB_POSTGRESDB_USER
              value: n8n
            - name: DB_POSTGRESDB_PASSWORD
              value: Password@123 # CHANGE ME to a secure password

            # n8n General Configuration
            - name: GENERIC_TIMEZONE
              value: Asia/Kolkata # Set to your own timezone (Asia/Kolkata shown here)
            - name: N8N_HOST
              value: hostname.com # Hostname for n8n to generate correct URLs
            - name: N8N_PORT
              value: "5678"
            - name: N8N_PROTOCOL
              # Important: Change to 'https' if you have TLS/Cert-Manager set up
              value: "http"
            - name: N8N_SECURE_COOKIE
              value: "false"

            # Webhook Configuration (Uses the public Ingress URL)
            - name: WEBHOOK_URL
              # NOTE: Change http to https if you enable TLS in the Ingress below!
              value: "http://hostname.com"

---
# 7. n8n Service (ClusterIP, exposed internally to the Ingress)
apiVersion: v1
kind: Service
metadata:
  name: n8n-service
  namespace: n8n
spec:
  type: ClusterIP # Changed from NodePort to ClusterIP, as Ingress handles external exposure
  selector:
    app: n8n
  ports:
    - port: 80 # The port the Ingress hits
      targetPort: 5678 # n8n's internal container port

---
# 8. Ingress Resource (Exposes the Service via the custom hostname)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: n8n-ingress
  namespace: n8n
  annotations:
    # Use the appropriate ingress class for your controller (e.g., 'nginx');
    # newer clusters can use spec.ingressClassName instead of this annotation
    kubernetes.io/ingress.class: "nginx"
    # Recommended for NGINX Ingress to correctly handle redirects and paths
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: hostname.com # YOUR CUSTOM DOMAIN
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: n8n-service
                port:
                  number: 80
  

YAML Explanation

The manifest is broken down into eight sequential Kubernetes resources, separated by ---:

  1. Namespace: Creates an isolated environment (n8n). All subsequent resources are deployed into this namespace.

  2. PersistentVolume: Defines the actual storage, using a hostPath type. Capacity is set to 2Gi for stability, and hostPath: /mnt/data/postgres ties it to a specific directory on a worker node.

  3. PersistentVolumeClaim: The application's request for storage. The storage: 2Gi request matches the PV capacity, and volumeName binds the claim to that PV.

  4. Deployment (postgres): Manages the PostgreSQL container. Sets the database name, user, and password via environment variables, and mounts postgres-pvc at /var/lib/postgresql/data so the data persists.

  5. Service (postgres): Creates a stable internal endpoint for PostgreSQL. ClusterIP lets the n8n pod always find the database via the name postgres on port 5432.

  6. Deployment (n8n): Manages the n8n application container. Sets the database connection variables and the critical WEBHOOK_URL so generated webhooks use the public-facing Ingress hostname.

  7. Service (n8n-service): Exposes n8n inside the cluster on port 80, forwarding to the container's port 5678. It stays ClusterIP here because the Ingress handles external exposure (we switch it to NodePort later).

  8. Ingress: Routes traffic for hostname.com to n8n-service on port 80 through your ingress controller (nginx in this example).
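One thing the manifest cannot do for you: the hostPath directory has to exist on the node where the postgres pod lands, owned by the postgres user, exactly as the comment in part 2 warns. On that node, something like:

    sudo mkdir -p /mnt/data/postgres
    sudo chown -R 999:999 /mnt/data/postgres   # 999 is the postgres UID in the official image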

Deploying Everything in One Command

Now, apply this manifest in your cluster:

  
    kubectl apply -f n8n-deployment.yaml
  

Give it a minute or two, then verify that both n8n and PostgreSQL are running smoothly:

  
    kubectl get pods -n n8n
kubectl get svc n8n-service -n n8n
  

Expected output:

[Screenshots: kubectl get pods and kubectl get svc output for the n8n namespace]
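For reference, a healthy rollout looks roughly like this (illustrative output; your pod name suffixes, IPs, and ages will differ):

    NAME                        READY   STATUS    RESTARTS   AGE
    n8n-6c9f9d7b9c-abcde        1/1     Running   0          2m
    postgres-7d4b8c6f5d-fghij   1/1     Running   0          2m

    NAME          TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
    n8n-service   ClusterIP   10.106.112.5   <none>        80/TCP    2m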

Exposing the Application to Access Outside the Cluster

When I deployed my n8n application on Kubernetes, it initially ran perfectly within the cluster, but it was only accessible internally through a ClusterIP service. In simple terms, the application was running inside the Kubernetes network but wasn't reachable from my local system or the public internet.

To make it accessible from outside, I needed to expose it using a NodePort. This opens a specific port on every node, making the service reachable through any node's IP address and that port number.

So, I patched my existing n8n service to change its type from ClusterIP to NodePort. Instead of editing the YAML file, I used the kubectl patch command, which is faster and doesn't require redeploying the service:

  
    # Change the service type from ClusterIP to NodePort
kubectl patch svc n8n-service -n n8n -p '{"spec": {"type": "NodePort"}}'
  

After applying this command, I verified the new port assigned to the service:

  
    kubectl get svc n8n-service -n n8n
  

The output showed the updated configuration, including the assigned NodePort (usually in the range 30000–32767). For example:

[Screenshot: kubectl get svc n8n-service showing type NodePort and the assigned port]
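In text form, the service line was roughly the following (the ClusterIP and age are illustrative; the 80:32539/TCP mapping is the actual assignment I got):

    NAME          TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
    n8n-service   NodePort   10.106.112.5   <none>        80:32539/TCP   3h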

Here, the NodePort is 32539.
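A randomly assigned port is fine for a lab, but if you want a predictable one you can set nodePort explicitly in the Service definition instead of patching (30080 below is only an example; pick any unused port in the 30000–32767 range):

    apiVersion: v1
    kind: Service
    metadata:
      name: n8n-service
      namespace: n8n
    spec:
      type: NodePort
      selector:
        app: n8n
      ports:
        - port: 80          # port inside the cluster
          targetPort: 5678  # n8n's container port
          nodePort: 30080   # example value; must be free and within 30000-32767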

Finally, I could access my n8n instance using my Kubernetes node's IP address and this port. For example:

  
    http://192.168.0.120:32539
  

This made my n8n application accessible from my local network and, with the right firewalling and routing in place, to external users as well.

[Screenshots: the n8n setup page loading at the node's IP and NodePort]

Scaling and Persistence

Want to scale? Simply update replicas:

  
    kubectl scale deployment n8n --replicas=3 -n n8n
  

Since n8n stores its data in PostgreSQL rather than a local SQLite file, all replicas see the same workflows and credentials. Be aware, though, that for real horizontal scaling n8n expects queue mode (a Redis-backed queue with dedicated worker processes); simply raising the replica count of a single instance in regular mode is an experiment, not a production scaling strategy.

Persistence is handled via the PostgreSQL PVC, so your workflows and credentials survive restarts and upgrades. One caveat: credentials are encrypted with a key that n8n keeps under /home/node/.n8n inside the container, which this manifest does not persist, so set N8N_ENCRYPTION_KEY explicitly (or mount a small volume for that directory) before you start storing credentials you care about.

Troubleshooting and The Human Element

Even with the "perfect" YAML, things go wrong. As a systems administrator, I spend 90% of my time figuring out why the machine didn't do what the manual said it should.

Anecdote: The Immutable Field Trap

When updating the PV to fix the 5Gi vs 2Gi issue, I learned about immutability firsthand. I tried to change the hostPath location in the YAML and hit this error:

"spec.persistentvolumesource: Forbidden: spec.persistentvolumesource is immutable after creation"

This error is a protective measure by Kubernetes. It prevents you from accidentally re-pointing a live database's volume to a completely different physical path without acknowledging the data risk.

The Solution is Ceremonial: You can't just edit the PV; you have to treat it like a database migration.

  1. Delete the workload (the postgres deployment).

  2. Delete the claim (postgres-pvc).

  3. Delete the volume (postgres-pv).

  4. Crucially: go to the host node, verify your data is safe (thanks to the Retain policy), and make sure the new folder is ready.

  5. Re-apply the corrected YAML.

It's an extra round of steps, but it forces you to respect the storage layer, which is where most Kubernetes databases fail.
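In kubectl terms, the ceremony looks roughly like this, with a manual check on the node between deleting and re-applying (resource names are the ones from the manifest above):

    kubectl delete deployment postgres -n n8n
    kubectl delete pvc postgres-pvc -n n8n
    kubectl delete pv postgres-pv

    # On the node: confirm /mnt/data/postgres (or its replacement) still holds your data

    kubectl apply -f n8n-deployment.yaml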

Final Thoughts: Lessons from the Journey

Deploying n8n on Kubernetes wasn’t just about YAML manifests — it was a deep dive into how automation platforms scale in real-world environments.

Along the way, I learned:

  • Labels and selectors matter. One typo can break connectivity.

  • Persistence is gold. PVCs prevent painful data loss.

  • PostgreSQL tuning improves n8n performance dramatically.

  • Debugging teaches you more than a flawless deploy ever will.

Today, my n8n instance runs seamlessly across a Kubernetes cluster, automating backups, posting alerts, and orchestrating container workflows — all in one system.

If you’ve ever thought of scaling your n8n beyond Docker, Kubernetes is your next step.
It may take a few YAML retries and some patience, but once it’s up, you’ll see why automation truly belongs in the cloud-native world.