From 60 Apps on Apache to Kubernetes - Infrastructure Migration in Wroclaw
How we migrated 60 applications from Apache2 and Docker to Kubernetes RKE2 with GitOps (ArgoCD), a universal Helm chart, and Sealed Secrets. A deployment case study.
Sixty applications. Half on Docker, half on Apache2. Three environments. Zero deployment repeatability. That was the starting point of one of our projects in Wroclaw, Poland, and exactly why the client reached out to us.
TL;DR
- Initial state: ~60 apps (separate frontend + backend containers), PHP + Node.js, mix of Apache2 and Docker, databases on shared VMs
- Solution: RKE2 (cluster per environment) + MetalLB + NGINX Ingress Controller + ArgoCD (GitOps) + universal Helm Chart + Sealed Secrets
- CI/CD: GitLab pipeline: build → test → push → deploy (branch mapping: dev → DEV, master → RC, tag → PROD)
- Result: repeatable deployments, one-commit rollbacks, full auditability
Problem: infrastructure that "sort of works"
The client ran their own data center with virtualization. Infrastructure grew organically: new app, new Apache vhost, copy-paste config from the previous project. The result?
| Element | Initial state |
|---|---|
| Applications | ~60 (separate frontend + backend containers) |
| Technologies | PHP (backend), Node.js (frontend) |
| Runtime | ~50% Docker, ~50% Apache2 |
| Databases | On shared VMs |
| Deployments | Mix of manual and semi-automated |
| Rollback | SSH + restoring files from backup |
The problem wasn't that apps didn't work. They worked. But every deployment looked different. Rollback was a surgical operation. Onboarding a new team member meant weeks of learning context. And one bad production deploy could ruin a Friday afternoon.
What hurt the most?
- No repeatability – deploy to DEV looked different than deploy to PROD
- No change history – "who deployed this?" was a question without an answer
- Manual rollbacks – reverting changes required SSH access and nerves of steel
- Scattered configuration – Apache vhosts, `.env` files, variables in different places
- No environment separation – DEV could affect RC
Client expectations
Requirements were specific and realistic:
- Kubernetes cluster – a stable, scalable platform
- Containerization support – migrating apps from Apache2 to containers
- Dockerfile optimization – smaller images, faster builds, fewer layers
- GitLab CI/CD – automated pipeline from code to production
- Easy deploy and rollback – one-click deployment, stress-free rollbacks
Sounds like a wish list? Maybe. But every one of those wishes can be fulfilled when the architecture is thought through from the ground up.
Solution: architecture step by step
Step 1: RKE2 – a cluster per environment
We chose RKE2, a lightweight, CNCF-certified Kubernetes distribution from Rancher with a built-in CIS hardening profile.
Key architectural decision: separate cluster per environment.
```
┌─────────────┐   ┌─────────────┐   ┌─────────────┐
│  RKE2 DEV   │   │  RKE2 RC    │   │  RKE2 PROD  │
│             │   │             │   │             │
│  3 nodes    │   │  3 nodes    │   │  5 nodes    │
│  dev apps   │   │  rc apps    │   │  prod apps  │
└─────────────┘   └─────────────┘   └─────────────┘
```
Why separate clusters instead of namespaces?
| Approach | Pros | Cons |
|---|---|---|
| Namespaces | Cheaper, simpler | Shared resources, blast radius |
| Separate clusters | Full isolation, independent upgrades | Higher infra cost |
We chose isolation. A failure on DEV doesn't touch production. A Kubernetes upgrade on RC doesn't risk PROD. Each environment has its own lifecycle.
Why RKE2 over kubeadm or k3s? The client required CIS hardening out of the box. RKE2 provides this by default. Additionally, Rancher's stable update channel gave the client confidence they wouldn't be stuck with an outdated cluster.
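As an illustration, enabling the CIS profile on an RKE2 server is a one-line setting in its config file. The sketch below is a hypothetical minimal config, not the client's actual one, and the exact profile value depends on the RKE2 version:

```yaml
# /etc/rancher/rke2/config.yaml - illustrative sketch
profile: cis-1.23            # built-in CIS hardening profile (name varies by version)
token: <shared-cluster-token>
tls-san:
  - rke2-prod.client.local   # extra SAN for the API endpoint (assumed hostname)
write-kubeconfig-mode: "0640"
```

Each environment's cluster gets its own config file and token, which is what makes the per-environment lifecycle (independent upgrades, isolated failures) possible.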
Step 2: MetalLB + NGINX Ingress Controller – the network layer
The client ran their own DC with no cloud load balancer available. We needed a way to expose services externally. The answer was MetalLB in L2 mode + NGINX Ingress Controller.
```
                Client DC
  ┌──────────┐    ┌───────────────────────────┐
  │ MetalLB  │───►│ NGINX Ingress Controller  │
  │ (VIP L2) │    │ (HTTP/HTTPS routing)      │
  └──────────┘    └─────────────┬─────────────┘
                                │
              ┌─────────────────┼─────────────┐
              ▼                 ▼             ▼
          ┌────────┐        ┌────────┐    ┌────────┐
          │  app1  │        │  app2  │    │  app3  │
          │frontend│        │backend │    │frontend│
          └────────┘        └────────┘    └────────┘
```
MetalLB assigns IP addresses from a pool defined in its configuration. In a bare-metal environment, it is the standard way to get `type: LoadBalancer` services without a cloud provider:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.10.100-10.0.10.120
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
```
NGINX Ingress Controller receives a VIP from MetalLB and routes HTTP/HTTPS traffic to the appropriate services based on Ingress rules:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-frontend
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - app1.client.pl
      secretName: app1-tls
  rules:
    - host: app1.client.pl
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app1-frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: app1-backend
                port:
                  number: 9000
```
Why MetalLB L2 instead of BGP? The client's network didn't support BGP on access switches. L2 works plug-and-play: just a pool of free IPs in the same network segment. For clusters with a dozen or so nodes, that's more than enough.
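To connect the two pieces: the ingress controller exposes a single `LoadBalancer` Service, and MetalLB answers it with a VIP from the pool above. A sketch of such a Service (names and the pool-pinning annotation are illustrative; recent MetalLB versions also accept `metallb.io/address-pool`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  annotations:
    metallb.universe.tf/address-pool: default-pool  # optional: pin to a specific pool
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```

All 60 applications then share this one VIP; routing to the right app happens purely on the Ingress layer by hostname and path.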
Step 3: containerizing applications
Key architectural decision: frontend and backend are separate containers. Each has its own Dockerfile, its own image, its own deployment in Kubernetes. This allows them to be scaled, deployed, and rolled back independently of each other.
Half the apps were already in Docker, but the Dockerfiles looked… creative. Typical issues:
```dockerfile
# ❌ A typical Dockerfile we found (PHP backend)
FROM php:8.1-apache
COPY . /var/www/html/
RUN apt-get update && apt-get install -y \
    git curl zip unzip libpng-dev libonig-dev \
    libxml2-dev libzip-dev nodejs npm
RUN composer install
RUN npm install && npm run build
EXPOSE 80
```
What's wrong?
- Frontend and backend in one image – no independent scaling
- One huge image (~1.2 GB) with build tools in production
- No multi-stage build
- No `.dockerignore` – `node_modules` and `.git` end up in the image
- `apt-get install` without `--no-install-recommends`
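Closing the `.dockerignore` gap is trivial; an illustrative minimal file for these apps might be:

```
# .dockerignore - illustrative example
.git
node_modules
vendor
*.log
.env
```

Excluding `vendor` and `node_modules` is safe here because dependencies are installed inside the build stages below, not copied from the host.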
After splitting into separate containers and optimizing:
```dockerfile
# ✅ Backend (PHP-FPM) – separate container
FROM composer:2 AS deps
WORKDIR /build
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --prefer-dist
COPY . .
RUN composer dump-autoload --optimize

FROM php:8.1-fpm-alpine
RUN apk add --no-cache libpng libxml2 libzip oniguruma
COPY --from=deps /build /var/www/html
USER www-data
EXPOSE 9000
```
```dockerfile
# ✅ Frontend (Node.js) – separate container
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
```
Results of splitting and optimization:
| Metric | Before (monolithic) | After (separate containers) |
|---|---|---|
| Image size | ~1.2 GB (single) | ~150 MB backend + ~30 MB frontend |
| Independent scaling | Impossible | Yes – frontend and backend separately |
| Independent deployments | Impossible | Yes – deploying backend doesn't restart frontend |
| Build time (cached) | ~6 min | ~30 sec (backend) + ~20 sec (frontend) |
| Build tools in prod | git, npm, composer | none |
Step 4: GitOps with ArgoCD
Instead of imperative deployments (`kubectl apply`, bash scripts), we implemented a declarative approach with ArgoCD.
The GitOps principle is simple:
The Git repository is the single source of truth for infrastructure state. No manual changes in the cluster.
ArgoCD continuously compares desired state (Git) with actual state (cluster) and automatically synchronizes differences.
ArgoCD repository structure
```
argocd-repo/
├── base/                      # Shared Helm Chart configuration
│   ├── Chart.yaml
│   ├── values.yaml            # Default values
│   └── templates/
│       ├── deployment.yaml
│       ├── service.yaml
│       ├── ingress.yaml
│       ├── hpa.yaml
│       └── sealed-secret.yaml
│
└── overlays/
    ├── dev/
    │   ├── app1/
    │   │   └── values.yaml    # image.tag, replicas, resources
    │   ├── app2/
    │   │   └── values.yaml
    │   └── ...
    ├── rc/
    │   ├── app1/
    │   │   └── values.yaml
    │   ├── app2/
    │   │   └── values.yaml
    │   └── ...
    └── prod/
        ├── app1/
        │   └── values.yaml    # Production tags, HPA, higher limits
        ├── app2/
        │   └── values.yaml
        └── ...
```
Each app on each environment = a separate `values.yaml` with overridden values. Shared base, differences in overlays.
Example overlay for PROD
```yaml
# overlays/prod/app1/values.yaml
image:
  repository: registry.client.local/app1
  tag: "a1b2c3d"   # Commit SHA, replaced by CI

replicaCount: 3

resources:
  requests:
    cpu: 200m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 512Mi

hpa:
  enabled: true
  minReplicas: 3
  maxReplicas: 10
  targetCPU: 70

ingress:
  host: app1.client.pl
  tls: true
```
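How does ArgoCD combine the shared chart in `base/` with a given overlay? One common pattern is an Application per app per environment. The sketch below assumes an ArgoCD version with multiple-sources support (`$ref` value files) and uses a hypothetical repo URL:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: app1-prod
  namespace: argocd
spec:
  project: default
  destination:
    server: https://kubernetes.default.svc
    namespace: app1
  sources:
    - repoURL: https://gitlab.client.local/infra/argocd-repo.git  # assumed URL
      targetRevision: main
      path: base                       # the shared chart
      helm:
        valueFiles:
          - $values/overlays/prod/app1/values.yaml  # the per-app overlay
    - repoURL: https://gitlab.client.local/infra/argocd-repo.git
      targetRevision: main
      ref: values
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```

With `automated` sync, a commit that changes the overlay is enough: ArgoCD notices the drift and applies it without any human running `kubectl`.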
Step 5: universal Helm Chart
Instead of maintaining 60 separate charts, we created one universal Helm Chart covering the needs of all applications.
What the chart supports:
| Component | Configurable |
|---|---|
| Deployment | replicas, strategy, resources, probes, env, volumes |
| Service | type, ports |
| Ingress | host, path, TLS, annotations |
| HPA | min/max replicas, targetCPU/Memory |
| Sealed Secret | encrypted secrets per environment |
| ConfigMap | application configuration |
| PDB | Pod Disruption Budget |
Key decision: frontend and backend are separate deployments, each using the same chart. Runtime type is controlled by a single switch:
```yaml
# Backend (PHP-FPM)
runtime: php
phpFpm:
  enabled: true

# Frontend (Node.js + nginx)
runtime: node
nginx:
  enabled: true

# Frontend (static build served by nginx)
runtime: static
nginx:
  enabled: true
```
One chart, ~60 applications (separate frontend + backend containers), zero configuration duplication.
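Inside the chart, that switch can be implemented with ordinary Helm conditionals. A simplified fragment of what a `templates/deployment.yaml` might do (an illustrative sketch, not the actual chart):

```yaml
# templates/deployment.yaml (fragment) - illustrative sketch
containers:
  - name: {{ .Release.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
    {{- if eq .Values.runtime "php" }}
    ports:
      - containerPort: 9000   # PHP-FPM listens here
    {{- else }}
    ports:
      - containerPort: 80     # nginx serves the frontend
    {{- end }}
```

The point of the pattern: behavioral differences live in a handful of `if` branches in one chart, while per-app differences live only in values files.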
Step 6: Sealed Secrets – safe secrets in Git
Managing secrets in GitOps is a classic problem: how do you keep passwords and keys in a repo without risking a leak?
We chose Sealed Secrets by Bitnami:
```
┌──────────────┐   kubeseal   ┌───────────────────┐
│ Secret.yaml  │ ───────────► │ SealedSecret.yaml │ ──► Git repo
│ (plaintext)  │   encrypt    │ (encrypted)       │
└──────────────┘              └─────────┬─────────┘
                                        │
                                        ▼
                              ┌───────────────────┐
                              │  Sealed Secrets   │
                              │ Controller (K8s)  │
                              │ decrypt → Secret  │
                              └───────────────────┘
```
Flow:
1. Developer creates a `Secret` with sensitive data
2. `kubeseal` encrypts it with the controller's public key
3. The encrypted `SealedSecret` goes to Git – safely
4. The controller in the cluster decrypts and creates a regular `Secret`
Important: A Sealed Secret is encrypted per namespace and per cluster. Even if someone copies it to another namespace, the controller will refuse to decrypt it.
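In practice, sealing a secret is a single pipeline of standard commands. This sketch assumes the controller runs in `kube-system` and uses hypothetical names and paths:

```shell
kubectl create secret generic app1-db \
  --namespace app1 \
  --from-literal=DB_PASSWORD='s3cr3t' \
  --dry-run=client -o yaml \
  | kubeseal --controller-namespace kube-system --format yaml \
  > overlays/prod/app1/sealed-secret.yaml
```

Note that `--dry-run=client` means the plaintext `Secret` never touches the cluster or the repo; only the sealed output is committed.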
Step 7: CI/CD pipeline in GitLab
Each application has a GitLab CI pipeline implementing the full cycle:
```yaml
stages:
  - build
  - test
  - push
  - deploy

build:
  stage: build
  script:
    - docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .

test:
  stage: test
  script:
    - docker run $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA ./run-tests.sh

push:
  stage: push
  script:
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA

deploy:
  stage: deploy
  script:
    - >
      sed -i "s|tag:.*|tag: \"${CI_COMMIT_SHA}\"|"
      argocd-repo/overlays/${TARGET_ENV}/${APP_NAME}/values.yaml
    - cd argocd-repo
    - git add . && git commit -m "deploy ${APP_NAME} ${CI_COMMIT_SHA}"
    - git push
  rules:
    - if: $CI_COMMIT_BRANCH == "dev"
      variables:
        TARGET_ENV: dev
    - if: $CI_COMMIT_BRANCH == "master"
      variables:
        TARGET_ENV: rc
    - if: $CI_COMMIT_TAG
      variables:
        TARGET_ENV: prod
```
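The deploy job's `sed` step can be sanity-checked locally; the snippet below simulates it with a throwaway file and a fixed SHA (paths and values are made up for the demo):

```shell
# Create a stand-in values.yaml (illustrative content only)
cat > /tmp/values.yaml <<'EOF'
image:
  repository: registry.client.local/app1
  tag: "old123"
EOF

# Same substitution the deploy job runs, with a fixed SHA for the demo
sed -i 's|tag:.*|tag: "a1b2c3d"|' /tmp/values.yaml
grep 'tag:' /tmp/values.yaml   # the tag line now carries "a1b2c3d"
```

Because the match starts at `tag:`, the leading YAML indentation is preserved, so the file stays valid after the substitution.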
Environment mapping:
| Git event | Environment | Automatic? |
|---|---|---|
| Push to `dev` | DEV | Yes |
| Push to `master` | RC | Yes |
| New tag | PROD | Yes |
Every image is tagged with the commit SHA: not `latest`, not `v1.2.3`, but the exact hash. You always know what code is running in a given environment:
```shell
# What commit is on PROD?
$ grep "tag:" overlays/prod/app1/values.yaml
tag: "a1b2c3d4e5f6"

$ git log --oneline a1b2c3d4e5f6
a1b2c3d fix: resolve payment gateway timeout
```
Step 8: what about the databases?
We deliberately left the databases on their existing VMs.
Why not migrate databases to Kubernetes?
| Argument | Our assessment |
|---|---|
| "K8s is for stateless" | That's a myth, but stateful on K8s requires a solid operator |
| Client has proven VMs | Backup, monitoring, failover β everything works |
| Migration risk | High, while the benefit in this context is marginal |
| Operator deployment cost | Time + expertise that doesn't need to be built right now |
Applications in Kubernetes connect to the databases over the internal network, via an `ExternalName` Service or a direct endpoint – simple and reliable.
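For example, an `ExternalName` Service gives an app a stable in-cluster DNS name for its database VM (the hostname below is assumed for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: app1-db
  namespace: app1
spec:
  type: ExternalName
  externalName: db01.client.local   # existing database VM (assumed hostname)
```

The app connects to `app1-db` and the underlying VM can later be replaced, or the database migrated, by changing one field instead of touching application config.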
Not every piece of infrastructure needs to go into Kubernetes. Sometimes the best architectural decision is the one you don't make.
What a typical deployment looks like, end to end
```
Developer pushes to dev branch
         │
         ▼
┌─────────────────┐
│  GitLab CI/CD   │
│  build → test   │
│  push → deploy  │
└────────┬────────┘
         │  Replaces tag in values.yaml
         ▼
┌─────────────────┐
│   ArgoCD repo   │
│  (Git commit)   │
└────────┬────────┘
         │  ArgoCD detects drift
         ▼
┌─────────────────┐
│   ArgoCD sync   │
│  (K8s apply)    │
└────────┬────────┘
         │  Rolling update
         ▼
┌─────────────────┐
│   New version   │
│ running on DEV  │
└─────────────────┘
```
Rollback? Revert the commit in the ArgoCD repo. ArgoCD detects the change and restores the previous version. Zero SSH, zero panic, full history in Git.
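That one-commit rollback can be simulated end to end in a throwaway repository (illustration only; file contents, messages, and paths are made up):

```shell
# Build a tiny fake GitOps repo with a good deploy followed by a bad one
rm -rf /tmp/gitops-demo
git init -q /tmp/gitops-demo
G="git -C /tmp/gitops-demo -c user.email=ci@example.com -c user.name=ci"

echo 'tag: "good111"' > /tmp/gitops-demo/values.yaml
$G add values.yaml && $G commit -qm 'deploy app1 good111'

echo 'tag: "bad222"' > /tmp/gitops-demo/values.yaml
$G add values.yaml && $G commit -qm 'deploy app1 bad222'

# The entire rollback: one revert commit
$G revert --no-edit HEAD
cat /tmp/gitops-demo/values.yaml
```

After the revert, the values file points at the previous tag again, and in the real setup ArgoCD would sync that state back onto the cluster.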
Final result
| Aspect | Before | After |
|---|---|---|
| Deployment time | 15-45 min (manual) | ~5 min (automated) |
| Rollback | SSH + backup (30+ min) | Git revert (~2 min) |
| Change history | None | Complete (Git log) |
| Repeatability | Low | 100% – same process on every environment |
| New team member onboarding | Weeks | Days – entire stack in code |
| Environment separation | Partial | Full – separate clusters |
| New application | Configure server from scratch | New overlay + values.yaml |
Takeaways
Not every transformation needs to be a revolution. Sometimes it's about organizing what already exists into a coherent, repeatable process.
Three principles that proved themselves in this project:
- GitOps as the foundation – Git is the source of truth. Not a person with SSH access.
- One chart, many apps – standardization reduces complexity by an order of magnitude.
- Pragmatism over purism – databases stayed on VMs because it made sense. We don't migrate for the sake of migrating.
Sixty applications. Three environments. One coherent process. Thatβs what software deployment should look like.
Facing a similar infrastructure challenge? Let's talk about your case.
Author: Wojciech Sokola @ OPSWRO Team