Docker  

Case Study: How Containerization Transformed a Legacy Ticketing System Into a Regulated SaaS Engine

In fintech, compliance isn’t a feature. It’s the architecture.

The Hidden Cost of “It Works on My Machine”

In the world of financial compliance, where every transaction is scrutinized and every audit trail must survive seven years of retention, the tools used to track investigations are rarely glamorous — but they’re absolutely critical.

One RegTech startup, building an AI-powered transaction monitoring platform for banks across Europe and Asia, relied on Request Tracker (RT) — a powerful, open-source ticketing system — to log, assign, and resolve suspicious activity alerts. RT offered custom fields, email integration, and audit logs. It was reliable. It was familiar.

But as the number of clients grew beyond a dozen — each governed by different data sovereignty laws (GDPR in Germany, PDPA in Singapore, etc.) — the infrastructure behind RT began to crack under the weight of its own complexity.

  • Each client needed complete data isolation.

  • Each client had different custom plugins.

  • Each server ran a different OS version, with mismatched Perl dependencies and undocumented configurations.

Upgrades were risky. Backups were manual. Audits took weeks.

The engineering team wasn’t failing because they were unskilled — they were failing because their architecture was built for a world that no longer existed.

The Shift: From Servers to Sovereign Instances

Instead of patching servers, the team made a radical decision:

Stop treating RT as a monolithic application. Start treating it as a regulated SaaS component.

They didn’t abandon RT.

They re-architected it.

Every client — no matter their location, size, or regulatory regime — would now run in a self-contained, version-controlled, immutable container stack.

No shared dependencies. No shared databases. No shared risk.

The foundation? Docker and Kubernetes, orchestrated through GitOps.

The goal? Zero-touch deployment, full auditability, and global scalability — all while staying compliant with MiFID II, FATF, and GDPR.

The Architecture: One Template, Infinite Deployments

At the heart of the new system was a single, reusable template — stored in Git and applied identically to every client.
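
One plausible layout for that repository (directory and file names here are illustrative, chosen to match the paths used by the CI workflow later in this article):

rt-infra/
  clients/
    client45/
      Dockerfile            # extends the shared RT base image
      docker-compose.yml    # instantiated from the shared template
      .env                  # CLIENT_ID, CLIENT_VERSION, CLIENT_PORT (no secrets)
      k8s/
        deployment.yaml     # watched and synced by ArgoCD
    client46/
      ...
  .github/
    workflows/
      deploy-client.yml     # the pipeline shown below

The volumes/ paths referenced by the compose template hold database data, certificates, and logs, so they are created on the deployment host rather than committed to Git.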

The Core Stack

  COMPONENT         IMAGE                     PURPOSE
  RT Application    bestpractical/rt:5.0.4    Core ticketing engine with compliance plugins
  Database          postgres:15               Dedicated PostgreSQL instance per client
  Reverse Proxy     nginx:alpine              TLS termination, client-specific routing
  Storage           Docker Volumes + S3       Immutable, versioned backups

Each client’s environment was isolated at the network, storage, and process level — enforced by container boundaries and Kubernetes namespaces.
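
On the Kubernetes side, that isolation can be made explicit with a default-deny policy in each client's namespace. A minimal sketch (namespace and client names are illustrative, and the cluster's CNI must enforce NetworkPolicy):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-to-namespace
  namespace: compliance-client45
spec:
  podSelector: {}            # applies to every pod in this client's namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}    # only pods in the same namespace may connect

Traffic from any other namespace, including another client's stack, is dropped by default; the public entry point would need its own explicit allow rule.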

Here’s the complete docker-compose.yml template used across all deployments:

  
version: '3.8'

services:
  rt-db:
    image: postgres:15
    container_name: rt-db-${CLIENT_ID}
    environment:
      POSTGRES_DB: rt_${CLIENT_ID}
      POSTGRES_USER: rt_user
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - ./volumes/${CLIENT_ID}/db:/var/lib/postgresql/data
    networks:
      - compliance-net
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U rt_user -d rt_${CLIENT_ID}"]
      interval: 10s
      timeout: 5s
      retries: 5

  rt-app:
    image: complianceguard/rt:${CLIENT_VERSION}
    container_name: rt-app-${CLIENT_ID}
    depends_on:
      rt-db:
        condition: service_healthy
    environment:
      RT_DB_HOST: rt-db
      RT_DB_NAME: rt_${CLIENT_ID}
      RT_DB_USER: rt_user
      RT_DB_PASS: ${DB_PASSWORD}
      RT_SITE_CONFIG: /opt/rt5/etc/RT_SiteConfig.pm
    volumes:
      - ./volumes/${CLIENT_ID}/rt-config:/opt/rt5/etc
      - ./volumes/${CLIENT_ID}/logs:/opt/rt5/var/log
    expose:
      - "80"   # reached only through the nginx proxy; the host port is published by nginx below
    networks:
      - compliance-net
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 30s
      timeout: 10s
      retries: 3

  nginx:
    image: nginx:alpine
    container_name: rt-nginx-${CLIENT_ID}
    ports:
      - "${CLIENT_PORT}:80"
    volumes:
      - ./volumes/${CLIENT_ID}/nginx.conf:/etc/nginx/conf.d/default.conf
      - ./volumes/${CLIENT_ID}/certs:/etc/nginx/certs
    depends_on:
      - rt-app
    networks:
      - compliance-net

networks:
  compliance-net:
    driver: bridge
  

Environment variables (CLIENT_ID, CLIENT_VERSION, CLIENT_PORT, DB_PASSWORD) are injected at deploy time — never hardcoded. Every client gets its own domain: client123.complianceplatform.io
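
A deploy then amounts to handing Compose the client's variable file and pulling the one secret from the vault at run time. The file name, values, and Vault path below are placeholders, not the team's actual settings:

# clients/client45/.env (checked into Git, contains no secrets)
#   CLIENT_ID=client45
#   CLIENT_VERSION=client45-v2.1.3
#   CLIENT_PORT=8445

# The database password is fetched from the secret store at deploy time only
export DB_PASSWORD="$(vault kv get -field=db_password secret/clients/client45)"

docker compose -f clients/client45/docker-compose.yml \
               --env-file clients/client45/.env \
               --project-name rt-client45 \
               up -d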

The Custom RT Image: Compliance in a Box

The base RT image was extended to include hardened configurations and compliance-specific functionality.

  
FROM bestpractical/rt:5.0.4

# Install custom compliance plugins
COPY plugins/ /opt/rt5/local/plugins/

# The plugin schema is applied at first deploy, not at build time,
# because no database is reachable while the image is being built:
#   /opt/rt5/sbin/rt-setup-database --action insert \
#     --datafile /opt/rt5/local/plugins/ComplianceGuard/lib/ComplianceGuard/Schema/Schema.pm

# Apply configuration overrides
COPY etc/RT_SiteConfig.pm /opt/rt5/etc/RT_SiteConfig.pm
COPY etc/RT_Config.pm /opt/rt5/etc/RT_Config.pm

# Enable external (REMOTE_USER) authentication and auto-creation of users
# (RT 5 names for the older $WebExternalAuth options)
RUN echo "Set(\$WebRemoteUserAuth, 1);" >> /opt/rt5/etc/RT_SiteConfig.pm && \
    echo "Set(\$WebRemoteUserAutocreate, 1);" >> /opt/rt5/etc/RT_SiteConfig.pm

# Ensure secure permissions
RUN chown -R www-data:www-data /opt/rt5/etc /opt/rt5/local/plugins && \
    chmod 644 /opt/rt5/etc/RT_SiteConfig.pm

HEALTHCHECK CMD curl -f http://localhost || exit 1
  

The custom plugins added:

  • Automatic tagging of tickets with regulatory frameworks (e.g., "FATF-12", "GDPR-Article-30")

  • Email-to-ticket ingestion from client-specific compliance addresses

  • SLA-based auto-closure timers

  • JSON-LD audit trail exports compliant with ISO 27001 and NIST SP 800-53

The image was built once, tested rigorously, and reused across 50+ client deployments — with zero configuration drift.

CI/CD: Git as the Single Source of Truth

Every change to a client’s configuration triggered an automated pipeline — from code commit to live deployment — in under 10 minutes.

  
name: Deploy Client

on:
  push:
    branches: [ main ]
    paths:
      - 'clients/**'

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0   # full history, so the changed client directory can be diffed

      - name: Extract Client ID
        run: |
          # Determine which client directory changed in this push
          CLIENT_DIR=$(git diff --name-only ${{ github.event.before }} ${{ github.sha }} | grep -o "^clients/[^/]*" | sort -u | head -n 1)
          echo "CLIENT_ID=${CLIENT_DIR#clients/}" >> $GITHUB_ENV

      - name: Build RT Image
        run: |
          docker build -t complianceguard/rt:${{ env.CLIENT_ID }}-v2.1.3 -f clients/${{ env.CLIENT_ID }}/Dockerfile .

      - name: Login to Docker Registry
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Push Image
        run: |
          docker push complianceguard/rt:${{ env.CLIENT_ID }}-v2.1.3

      - name: Update Kubernetes Manifest
        run: |
          sed -i "s|image: complianceguard/rt:.*|image: complianceguard/rt:${{ env.CLIENT_ID }}-v2.1.3|g" clients/${{ env.CLIENT_ID }}/k8s/deployment.yaml
          git config --global user.email "[email protected]"
          git config --global user.name "CI Bot"
          git add clients/${{ env.CLIENT_ID }}/k8s/deployment.yaml
          git commit -m "chore: update ${CLIENT_ID} to v2.1.3" || exit 0
          git push

      - name: Trigger ArgoCD Sync
        run: |
          curl -X POST https://argocd.platform.com/webhook \
            -H "Content-Type: application/json" \
            -d '{"repository": "https://github.com/org/rt-infra", "revision": "main"}'
  

ArgoCD monitored the clients/ directory in Git. When a new commit appeared, it automatically synchronized the client’s Kubernetes cluster — no human intervention required.
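
In ArgoCD terms, each client is typically described by its own Application resource pointing at that client's directory. A sketch for one client, reusing the repository URL and names from the examples above:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: rt-client45
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/org/rt-infra
    targetRevision: main
    path: clients/client45/k8s              # only this client's manifests
  destination:
    server: https://kubernetes.default.svc  # or the client's regional cluster endpoint
    namespace: compliance-client45
  syncPolicy:
    automated:
      prune: true      # remove resources that were deleted from Git
      selfHeal: true   # revert any out-of-band changes to match Git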

Deployment: Regional Isolation, Global Scale

Each client was deployed to a dedicated Kubernetes namespace in a cloud region matching their legal jurisdiction:

  
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rt-app-client45
  namespace: compliance-client45
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rt-app
  template:
    metadata:
      labels:
        app: rt-app
    spec:
      containers:
        - name: rt-app
          image: complianceguard/rt:client45-v2.1.3
          ports:
            - containerPort: 80
          env:
            - name: CLIENT_ID
              value: "client45"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: rt-secrets-client45
                  key: db-password
          volumeMounts:
            - name: rt-config
              mountPath: /opt/rt5/etc
            - name: rt-logs
              mountPath: /opt/rt5/var/log
      volumes:
        - name: rt-config
          persistentVolumeClaim:
            claimName: rt-config-pvc-client45
        - name: rt-logs
          persistentVolumeClaim:
            claimName: rt-logs-pvc-client45
---
apiVersion: v1
kind: Service
metadata:
  name: rt-service-client45
  namespace: compliance-client45
spec:
  type: LoadBalancer
  selector:
    app: rt-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
  

Every client had:

  • A dedicated namespace

  • Encrypted secrets, injected via Vault Agent sidecars (or Kubernetes Secrets), so that no credentials were ever baked into container images or repositories (see the sketch after this list)

  • Persistent volumes stored in the same region as the client’s legal base

  • A public domain: client45.complianceplatform.io
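
A common way to wire in those Vault Agent sidecars (not necessarily the team's exact setup) is through annotations on the Deployment's pod template; the injector then adds a sidecar that renders the secret into the pod at runtime. The role and secret path below are illustrative:

  template:
    metadata:
      labels:
        app: rt-app
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "rt-client45"
        vault.hashicorp.com/agent-inject-secret-db-password: "secret/data/clients/client45/db"
        vault.hashicorp.com/agent-inject-template-db-password: |
          {{- with secret "secret/data/clients/client45/db" -}}
          {{ .Data.data.password }}
          {{- end -}}

The rendered credential appears under /vault/secrets/ inside the pod, so neither the container image nor the Git repository ever holds it.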

When a regulator requested historical data for a ticket (say, RT-78921) from a German client, the response wasn’t a spreadsheet or a manual search.

It was a cryptographically signed, timestamped, immutable backup.

  
#!/bin/bash
set -euo pipefail

CLIENT_ID="client45"
TIMESTAMP=$(date +%Y-%m-%dT%H:%M:%SZ)
BACKUP_FILE="/backups/${CLIENT_ID}-rt-${TIMESTAMP}.sql.gz"

# Dump the client's dedicated database from its container
docker exec "rt-db-${CLIENT_ID}" pg_dump -U rt_user "rt_${CLIENT_ID}" | gzip > "$BACKUP_FILE"

# Sign with GPG (private key from secure vault)
gpg --detach-sign --armor --output "${BACKUP_FILE}.asc" "$BACKUP_FILE"

# Upload to S3 (bucket has versioning and legal hold enabled)
aws s3 cp "$BACKUP_FILE" "s3://compliance-backups/${CLIENT_ID}/" --acl bucket-owner-full-control
aws s3 cp "${BACKUP_FILE}.asc" "s3://compliance-backups/${CLIENT_ID}/"

# Log to immutable audit trail
echo "$(date): Backup created for ${CLIENT_ID} at ${TIMESTAMP}" >> /audit/log.txt
  

Regulators received:

  • A .sql.gz database dump

  • A .asc GPG signature (verifiable with a public key)

  • A timestamped, cryptographically signed log entry

— all stored in immutable S3 buckets with versioning and legal hold enabled.
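
An auditor can confirm both properties with standard tooling; a minimal sketch, where the object name follows the backup script's conventions and the public-key file name is illustrative:

OBJECT="client45-rt-2024-03-01T02:00:00Z.sql.gz"   # example backup produced by the script above

# Fetch the backup and its detached signature
aws s3 cp "s3://compliance-backups/client45/${OBJECT}" .
aws s3 cp "s3://compliance-backups/client45/${OBJECT}.asc" .

# Verify the signature against the published public key
gpg --import complianceguard-backups.pub.asc
gpg --verify "${OBJECT}.asc" "${OBJECT}"

# Confirm that the bucket versions every object and that this backup is under legal hold
aws s3api get-bucket-versioning --bucket compliance-backups
aws s3api get-object-legal-hold --bucket compliance-backups --key "client45/${OBJECT}"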

This is not about Request Tracker. This is about how regulated software must be built today. Legacy tools — whether ticketing systems, identity managers, or log aggregators — can’t be patched into compliance. They must be designed for it from the ground up.

Containerization turned a brittle, manual process into a scalable, auditable, and repeatable system. GitOps turned configuration drift into version-controlled trust. Kubernetes turned chaos into orchestrated sovereignty.

In fintech, compliance isn’t a feature.

It’s the foundation.

And containers?

They’re the bricks.

(Note: This is a generalized template. No proprietary client data, names, or configurations are included.)

Use it. Learn from it. Adapt it.