
From 60 Apps on Apache to Kubernetes - Infrastructure Migration in Wroclaw

How we migrated 60 applications from Apache2 and Docker to Kubernetes RKE2 with GitOps (ArgoCD), Helm Chart, and Sealed Secrets. A deployment case study.

Sixty applications. Half on Docker, half on Apache2. Three environments. Zero deployment repeatability. That was the starting point of one of our projects in Wroclaw, Poland – and exactly why the client reached out to us.

📋 TL;DR

  • Initial state: ~60 apps (separate frontend + backend containers), PHP + Node.js, mix of Apache2 and Docker, databases on shared VMs
  • Solution: RKE2 (cluster per environment) + MetalLB + NGINX Ingress Controller + ArgoCD (GitOps) + universal Helm Chart + Sealed Secrets
  • CI/CD: GitLab pipeline → build → test → push → deploy (branch mapping: dev→DEV, master→RC, tag→PROD)
  • Result: repeatable deployments, one-commit rollbacks, full auditability

⚠️ Problem: infrastructure that “sort of works”

The client ran their own data center with virtualization. The infrastructure grew organically – new app, new Apache vhost, copy-paste config from the previous project. The result?

| Element | Initial state |
| --- | --- |
| Applications | ~60 (separate frontend + backend containers) |
| Technologies | PHP (backend), Node.js (frontend) |
| Runtime | ~50% Docker, ~50% Apache2 |
| Databases | On shared VMs |
| Deployments | Mix of manual and semi-automated |
| Rollback | SSH + restoring files from backup |

The problem wasn’t that apps didn’t work. They worked. But every deployment looked different. Rollback was a surgical operation. Onboarding a new team member meant weeks of learning context. And one bad production deploy could ruin a Friday afternoon.

What hurt the most?

  1. No repeatability – deploy to DEV looked different than deploy to PROD
  2. No change history – “who deployed this?” was a question without an answer
  3. Manual rollbacks – reverting changes required SSH access and nerves of steel
  4. Scattered configuration – Apache vhosts, .env files, variables in different places
  5. No environment separation – DEV could affect RC

🎯 Client expectations

Requirements were specific and realistic:

  • Kubernetes cluster – a stable, scalable platform
  • Containerization support – migrating apps from Apache2 to containers
  • Dockerfile optimization – smaller images, faster builds, fewer layers
  • GitLab CI/CD – automated pipeline from code to production
  • Easy deploy and rollback – one-click deployment, stress-free rollbacks

Sounds like a wish list? Maybe. But every one of those wishes can be fulfilled when the architecture is thought through from the ground up.

🔧 Solution: architecture step by step

Step 1: RKE2 – a cluster per environment

We chose RKE2 – a lightweight, CNCF-certified Kubernetes distribution from Rancher with a built-in CIS hardening profile.

Key architectural decision: separate cluster per environment.

┌─────────────┐  ┌─────────────┐  ┌─────────────┐
│   RKE2 DEV  │  │   RKE2 RC   │  │  RKE2 PROD  │
│             │  │             │  │             │
│  3 nodes    │  │  3 nodes    │  │  5 nodes    │
│  dev apps   │  │  rc apps    │  │  prod apps  │
└─────────────┘  └─────────────┘  └─────────────┘

Why separate clusters instead of namespaces?

| Approach | Pros | Cons |
| --- | --- | --- |
| Namespaces | Cheaper, simpler | Shared resources, blast radius |
| Separate clusters | Full isolation, independent upgrades | Higher infra cost |

We chose isolation. A failure on DEV doesn’t touch production. A Kubernetes upgrade on RC doesn’t risk PROD. Each environment has its own lifecycle.

💡 Why RKE2 over kubeadm or k3s? The client required CIS hardening out of the box. RKE2 provides this by default. Additionally, Rancher’s stable update channel gave the client confidence they wouldn’t be stuck with an outdated cluster.
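For context, enabling that hardening is a one-line setting in the RKE2 server config; a minimal sketch, where the hostname is illustrative and the exact profile value depends on the RKE2 version (older releases use e.g. cis-1.23):

```yaml
# /etc/rancher/rke2/config.yaml on a server node (illustrative)
profile: cis              # enable the built-in CIS hardening profile
tls-san:
  - k8s-dev.client.local  # hypothetical API endpoint for the DEV cluster
```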

Step 2: MetalLB + NGINX Ingress Controller – the network layer

The client ran their own DC with no cloud load balancer available. We needed a way to expose services externally. The answer was MetalLB in L2 mode + NGINX Ingress Controller.

┌──────────────────────────────────────────────────┐
│                   Client DC                      │
│                                                  │
│  ┌──────────┐    ┌────────────────────────────┐  │
│  │ MetalLB  │───►│  NGINX Ingress Controller  │  │
│  │ (VIP L2) │    │  (HTTP/HTTPS routing)      │  │
│  └──────────┘    └─────────┬──────────────────┘  │
│                            │                     │
│              ┌─────────────┼─────────────┐       │
│              ▼             ▼             ▼       │
│         ┌────────┐   ┌────────┐   ┌────────┐     │
│         │ app1   │   │ app2   │   │ app3   │     │
│         │frontend│   │backend │   │frontend│     │
│         └────────┘   └────────┘   └────────┘     │
│                                                  │
└──────────────────────────────────────────────────┘

MetalLB assigns IP addresses from a pool defined in its configuration. In a bare-metal environment, this is the only way to get type: LoadBalancer without a cloud provider:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.10.100-10.0.10.120
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool

NGINX Ingress Controller receives a VIP from MetalLB and routes HTTP/HTTPS traffic to the appropriate services based on Ingress rules:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-frontend
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app1.client.pl
      secretName: app1-tls
  rules:
    - host: app1.client.pl
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: app1-backend
                port:
                  number: 9000

💡 Why MetalLB L2 instead of BGP? The client’s network didn’t support BGP on access switches. L2 works plug-and-play – just a pool of free IPs in the same network segment. For clusters with a dozen or so nodes, that’s more than enough.
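Putting the two pieces together: the ingress controller’s Service simply requests type: LoadBalancer, and MetalLB answers with a VIP from the pool. A minimal sketch (names and the selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: LoadBalancer          # MetalLB assigns e.g. 10.0.10.100 from default-pool
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```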

Step 3: containerizing applications

Key architectural decision: frontend and backend are separate containers. Each has its own Dockerfile, its own image, its own deployment in Kubernetes. This allows them to be scaled, deployed, and rolled back independently of each other.

Half the apps were already in Docker, but the Dockerfiles looked… creative. Typical issues:

# ❌ A typical Dockerfile we found (PHP backend)
FROM php:8.1-apache
COPY . /var/www/html/
RUN apt-get update && apt-get install -y \
    git curl zip unzip libpng-dev libonig-dev \
    libxml2-dev libzip-dev nodejs npm
RUN composer install
RUN npm install && npm run build
EXPOSE 80

What’s wrong?

  • Frontend and backend in one image – no independent scaling
  • One huge image (~1.2 GB) with build tools in production
  • No multi-stage build
  • No .dockerignore – node_modules and .git end up in the image
  • apt-get install without --no-install-recommends
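The .dockerignore issue is the cheapest to fix. A minimal sketch covering the usual suspects for a PHP + Node.js codebase:

```
# .dockerignore -- keep the build context small and secrets out of images
.git
node_modules
vendor
*.log
.env
```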

After splitting into separate containers and optimizing:

# ✅ Backend (PHP-FPM) – separate container
FROM composer:2 AS deps
WORKDIR /build
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --prefer-dist
COPY . .
RUN composer dump-autoload --optimize

FROM php:8.1-fpm-alpine
RUN apk add --no-cache libpng libxml2 libzip oniguruma
WORKDIR /var/www/html
COPY --from=deps --chown=www-data:www-data /build /var/www/html
USER www-data
EXPOSE 9000

# ✅ Frontend (Node.js) – separate container
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
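The nginx.conf copied in the last step isn’t shown above; a minimal version that serves the static build and falls back to index.html for client-side routing might look like this (illustrative):

```nginx
# conf.d/default.conf -- static frontend with SPA fallback (illustrative)
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ /index.html;  # client-side routing fallback
    }
}
```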

Results of splitting and optimization:

| Metric | Before (monolithic) | After (separate containers) |
| --- | --- | --- |
| Image size | ~1.2 GB (single) | ~150 MB backend + ~30 MB frontend |
| Independent scaling | Impossible | Yes – frontend and backend separately |
| Independent deployments | Impossible | Yes – deploying backend doesn’t restart frontend |
| Build time (cached) | ~6 min | ~30 sec (backend) + ~20 sec (frontend) |
| Build tools in prod | git, npm, composer | none |

Step 4: GitOps with ArgoCD

Instead of imperative deployments (kubectl apply, bash scripts), we implemented a declarative approach with ArgoCD.

The GitOps principle is simple:

The Git repository is the single source of truth for infrastructure state. No manual changes in the cluster.

ArgoCD continuously compares desired state (Git) with actual state (cluster) and automatically synchronizes differences.
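Each app/environment pair is registered in ArgoCD as an Application that points at the shared chart plus the environment overlay. A sketch under assumed conventions (repo URL, paths, and namespaces are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app1-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.client.local/infra/argocd-repo.git
    targetRevision: main
    path: base                               # the shared Helm Chart
    helm:
      valueFiles:
        - ../overlays/prod/app1/values.yaml  # environment-specific overrides
  destination:
    server: https://kubernetes.default.svc
    namespace: app1
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual changes in the cluster
```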

ArgoCD repository structure

argocd-repo/
├── base/                    # Shared Helm Chart configuration
│   ├── Chart.yaml
│   ├── values.yaml          # Default values
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       ├── hpa.yaml
│       └── sealed-secret.yaml
│
└── overlays/
    ├── dev/
    │   ├── app1/
    │   │   └── values.yaml  # image.tag, replicas, resources
    │   ├── app2/
    │   │   └── values.yaml
    │   └── ...
    ├── rc/
    │   ├── app1/
    │   │   └── values.yaml
    │   ├── app2/
    │   │   └── values.yaml
    │   └── ...
    └── prod/
        ├── app1/
        │   └── values.yaml  # Production tags, HPA, higher limits
        ├── app2/
        │   └── values.yaml
        └── ...

Each app on each environment = a separate values.yaml with overridden values. Shared base, differences in overlays.

Example overlay for PROD

# overlays/prod/app1/values.yaml
image:
  repository: registry.client.local/app1
  tag: "a1b2c3d"  # Commit SHA – replaced by CI

replicaCount: 3

resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

hpa:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPU: 70

ingress:
  host: app1.client.pl
  tls: true

Step 5: universal Helm Chart

Instead of maintaining 60 separate charts, we created one universal Helm Chart covering the needs of all applications.

What the chart supports:

| Component | Configurable |
| --- | --- |
| Deployment | replicas, strategy, resources, probes, env, volumes |
| Service | type, ports |
| Ingress | host, path, TLS, annotations |
| HPA | min/max replicas, targetCPU/Memory |
| Sealed Secret | encrypted secrets per environment |
| ConfigMap | application configuration |
| PDB | Pod Disruption Budget |

Key decision: frontend and backend are separate deployments, each using the same chart. Runtime type is controlled by a single switch:

# Backend (PHP-FPM)
runtime: php
phpFpm:
  enabled: true

# Frontend (Node.js + nginx)
runtime: node
nginx:
  enabled: true

# Frontend (static build served by nginx)
runtime: static
nginx:
  enabled: true

One chart, ~60 applications (separate frontend + backend containers), zero configuration duplication.
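On the template side, the runtime switch can be a plain conditional. A simplified fragment of what templates/deployment.yaml might do (field names are illustrative, not the actual chart):

```yaml
# templates/deployment.yaml (simplified, illustrative)
spec:
  containers:
    - name: {{ .Chart.Name }}
      image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
      ports:
        {{- if eq .Values.runtime "php" }}
        - containerPort: 9000   # PHP-FPM
        {{- else }}
        - containerPort: 80     # nginx serving Node.js or static build
        {{- end }}
```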

Step 6: Sealed Secrets – safe secrets in Git

Managing secrets in GitOps is a classic problem: how do you keep passwords and keys in a repo without risking a leak?

We chose Sealed Secrets by Bitnami:

┌──────────────┐    kubeseal     ┌───────────────────┐
│  Secret.yaml │ ──────────────► │ SealedSecret.yaml │ ──► Git repo
│  (plaintext) │    encrypt      │   (encrypted)     │
└──────────────┘                 └───────────────────┘
                                          │
                                          ▼
                                 ┌───────────────────┐
                                 │  Sealed Secrets   │
                                 │  Controller (K8s) │
                                 │  decrypt → Secret │
                                 └───────────────────┘

Flow:

  1. Developer creates a Secret with sensitive data
  2. kubeseal encrypts it with the controller’s public key
  3. The encrypted SealedSecret goes to Git – safely
  4. The controller in the cluster decrypts and creates a regular Secret

Important: A Sealed Secret is encrypted per namespace and per cluster. Even if someone copies it to another namespace, the controller will refuse to decrypt it.
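In practice that flow is two commands and one committed file. A sketch with illustrative names and a truncated placeholder ciphertext:

```yaml
# Created with (illustrative):
#   kubectl create secret generic app1-db --dry-run=client \
#     --from-literal=DB_PASSWORD=s3cret -o yaml | kubeseal -o yaml > sealed-secret.yaml
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: app1-db
  namespace: app1
spec:
  encryptedData:
    DB_PASSWORD: AgBy8hCi...   # placeholder -- only the in-cluster controller can decrypt
```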

Step 7: CI/CD pipeline in GitLab

Each application has a GitLab CI pipeline implementing the full cycle:

stages:
  - build
  - test
  - push
  - deploy

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .

test:
  stage: test
  script:
    - docker run $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA ./run-tests.sh

push:
  stage: push
  script:
    - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - >
      sed -i "s|tag:.*|tag: \"${CI_COMMIT_SHA}\"|"
      argocd-repo/overlays/${TARGET_ENV}/${APP_NAME}/values.yaml
    - cd argocd-repo
    - git add . && git commit -m "deploy ${APP_NAME} ${CI_COMMIT_SHA}"
    - git push
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
      variables:
        TARGET_ENV: dev
    - if: $CI_COMMIT_BRANCH == "master"
      variables:
        TARGET_ENV: rc
    - if: $CI_COMMIT_TAG
      variables:
        TARGET_ENV: prod

Environment mapping:

| Git event | Environment | Automatic? |
| --- | --- | --- |
| Push to dev | DEV | Yes |
| Push to master | RC | Yes |
| New tag | PROD | Yes |

Every image is tagged with the commit SHA – not latest, not v1.2.3, but the exact hash. You always know what code is running on a given environment:

# What commit is on PROD?
$ grep "tag:" overlays/prod/app1/values.yaml
  tag: "a1b2c3d4e5f6"

$ git log --oneline a1b2c3d4e5f6
a1b2c3d fix: resolve payment gateway timeout

Step 8: what about the databases?

We deliberately left the databases on their existing VMs.

Why not migrate databases to Kubernetes?

| Argument | Our assessment |
| --- | --- |
| “K8s is for stateless” | That’s a myth – but stateful on K8s requires a solid operator |
| Client has proven VMs | Backup, monitoring, failover – everything works |
| Migration risk | High, while the benefit in this context is marginal |
| Operator deployment cost | Time + expertise that doesn’t need to be built right now |

Applications in Kubernetes connect to databases over the internal network. ExternalName Service or direct endpoint – simple and reliable.
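The ExternalName variant lets apps keep using an in-cluster DNS name while the database stays on its VM (names and hostname are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-db
  namespace: app1
spec:
  type: ExternalName
  externalName: db01.client.local   # VM hostname outside the cluster
```

The app points its connection string at app1-db; if the database ever does move into the cluster, swapping this for a regular Service is a one-file change.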

💡 Not every piece of infrastructure needs to go into Kubernetes. Sometimes the best architectural decision is the one you don’t make.

📊 What a typical deployment looks like – end to end

Developer pushes to dev branch
         │
         ▼
┌─────────────────┐
│  GitLab CI/CD   │
│  build → test   │
│  push → deploy  │
└────────┬────────┘
         │ Replaces tag in values.yaml
         ▼
┌─────────────────┐
│  ArgoCD repo    │
│  (Git commit)   │
└────────┬────────┘
         │ ArgoCD detects drift
         ▼
┌─────────────────┐
│  ArgoCD sync    │
│  (K8s apply)    │
└────────┬────────┘
         │ Rolling update
         ▼
┌─────────────────┐
│  New version    │
│  running on DEV │
└─────────────────┘

Rollback? Revert the commit in the ArgoCD repo. ArgoCD detects the change and restores the previous version. Zero SSH, zero panic, full history in Git.
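The mechanics can be shown on a throwaway local repo – one revert commit restores the previous image tag, which is exactly what ArgoCD would then sync (paths and SHAs are illustrative, not the client’s repo):

```shell
# Demo of one-commit rollback on a local repo; in the real setup the revert
# lands in the ArgoCD repo and ArgoCD syncs the cluster back automatically.
set -e
rm -rf /tmp/argocd-demo && git init -q /tmp/argocd-demo && cd /tmp/argocd-demo
git config user.email "ci@example.com" && git config user.name "ci"

mkdir -p overlays/prod/app1
echo 'tag: "good-sha"' > overlays/prod/app1/values.yaml
git add -A && git commit -qm "deploy app1 good-sha"

echo 'tag: "bad-sha"' > overlays/prod/app1/values.yaml
git add -A && git commit -qm "deploy app1 bad-sha"

# Rollback = one revert commit; no SSH, full history preserved in git log
git revert --no-edit HEAD > /dev/null
cat overlays/prod/app1/values.yaml        # tag: "good-sha"
```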

✅ Final result

| Aspect | Before | After |
| --- | --- | --- |
| Deployment time | 15–45 min (manual) | ~5 min (automated) |
| Rollback | SSH + backup (30+ min) | Git revert (~2 min) |
| Change history | None | Complete (Git log) |
| Repeatability | Low | 100% – same process on every environment |
| New team member onboarding | Weeks | Days – entire stack in code |
| Environment separation | Partial | Full – separate clusters |
| New application | Configure server from scratch | New overlay + values.yaml |

🎯 Takeaways

Not every transformation needs to be a revolution. Sometimes it’s about organizing what already exists into a coherent, repeatable process.

Three principles that proved themselves in this project:

  1. GitOps as the foundation – Git is the source of truth. Not a person with SSH access.
  2. One chart, many apps – standardization reduces complexity by an order of magnitude.
  3. Pragmatism over purism – databases stayed on VMs because it made sense. We don’t migrate for the sake of migrating.

Sixty applications. Three environments. One coherent process. That’s what software deployment should look like.


Facing a similar infrastructure challenge? Let’s talk about your case.

Author: Wojciech Sokola @ OPSWRO Team