K8s

Right up front: I’m still quite new to the whole Kubernetes world, so please be understanding if there are better ways to do things or if I did something wrong. :)

In autumn 2024, I heard about a new managed Kubernetes service here in Switzerland by my preferred hosting company, Infomaniak. The service was not released yet, but I already started planning the migration of my whole infrastructure. At that time, it was distributed across several VPSs, a Kubernetes cluster at DigitalOcean, and some services running at home. I started searching for Helm charts and writing my own where none were available.

After Infomaniak finally released their managed Kubernetes service, I could breathe a sigh of relief, since the pricing was more or less what I expected. My main fear was that it would be as expensive as other providers and therefore not really worth it for a private person to run a cluster. Fortunately, Infomaniak released a free control plane, and you only pay for the worker nodes via their public cloud pricing.

I opened their calculator and added two worker nodes with 4 CPUs and 8 GB of memory each. The price was reasonable, and I could easily upgrade later to the next node size with 4 CPUs and 16 GB of memory. I also added a Load Balancer and a public IPv4 address to my shopping cart and started setting up my new Kubernetes cluster.

Security Notes

At this point, it is important to mention that this article describes a personal Kubernetes setup and intentionally omits some production-grade hardening steps.

⚠️ Important considerations:

  • The Kubernetes Dashboard should never be exposed publicly.
  • cluster-admin roles are used here for demonstration purposes only.
  • Kubernetes Secrets are base64-encoded, not encrypted.
  • All credentials shown are placeholders.
  • Real secrets are managed via the 1Password Kubernetes Operator.

If you plan to use similar patterns in production, additional measures such as RBAC minimization, NetworkPolicies, secret encryption at rest, and private cluster access are strongly recommended.
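As one example of such hardening, a default-deny NetworkPolicy is a common starting point (the namespace name is a placeholder):

```yaml
# Deny all ingress and egress traffic in a namespace by default.
# Individual workloads then need explicit NetworkPolicies to communicate.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: my-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```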

Services

As mentioned earlier, I had several different VPSs and other pieces of infrastructure. For example, I was running InvoicePlane, Shlink, and more (services to be added later). My goal was to migrate all of them to the new Kubernetes cluster.

For services like Shlink, where the UI is not protected by default, I wanted to add an authentication layer in front of them. Because of that, I decided to use Zitadel. I had already used it in other projects and was always very satisfied with it.
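One common way to put such an authentication layer in front of a service with the NGINX Ingress Controller is external auth via oauth2-proxy, with Zitadel acting as the OIDC provider. This is not my exact setup, just a rough sketch with placeholder hostnames:

```yaml
# Sketch: protect an Ingress with an external oauth2-proxy instance
# (hostnames and the service port are placeholders, not my real setup).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: shlink
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://auth.example.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://auth.example.com/oauth2/start?rd=$scheme://$host$request_uri"
spec:
  ingressClassName: nginx
  rules:
    - host: shlink.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: shlink
                port:
                  number: 8080
```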

I then started writing my own Helm charts where no official or community-maintained ones existed. The first service I deployed on the cluster was the NGINX Ingress Controller, which was quite simple (see the example below). After that, I installed cert-manager, which was also easy to deploy. Then I created two ClusterIssuer resources for issuing TLS certificates.

values.yaml for the ingress-nginx Helm chart

controller:
  admissionWebhooks:
    enabled: true
    patch:
      enabled: true

  service:
    externalTrafficPolicy: Local
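For reference, installing the chart with this values file looks roughly like this (the release name and namespace are my choices, not requirements):

```shell
# Add the official repo and install ingress-nginx with the values above.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  -f values.yaml
```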

values.yaml for the cert-manager Helm chart

cert-manager:
  enabled: true
  crds:
    enabled: true

issuer:
  email: hi@mydomain.com

ClusterIssuer

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: hi@mydomain.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: hi@mydomain.com
    privateKeySecretRef:
      name: letsencrypt-prod
    solvers:
      - http01:
          ingress:
            ingressClassName: nginx
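With the issuers in place, requesting a certificate for a service is just an annotation plus a `tls` section on the Ingress (hostname and service name are placeholders):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-service
  annotations:
    # cert-manager watches this annotation and issues the certificate
    # into the Secret named below.
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - my-service.mydomain.com
      secretName: my-service-tls
  rules:
    - host: my-service.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```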

After having those two issuers in place, I wanted to deploy the Kubernetes Dashboard. Since I had already deployed it on my old DigitalOcean cluster, I remembered it as a single pod (or at least it was back then). This time, however, I ended up with several pods, proxies, and additional components.

I initially tried to simplify the deployment because I wanted to keep it as minimal as possible. After spending several hours on this, I decided to use the default setup instead—which, in most cases, is the better approach anyway. And who would have thought: it worked perfectly.

I opened my browser, navigated to the Kubernetes Dashboard, typed thisisunsafe to bypass Chrome’s SSL warning, and there it was—my dashboard. I then noticed that I could no longer log in using my kubeconfig, as I was able to do with the old dashboard. Instead, I created a service account and issued a token for authentication.

Typing thisisunsafe should only be done on trusted networks, as it bypasses important security warnings. In a production environment, you should always issue a valid TLS certificate for the dashboard to avoid these warnings entirely.
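A safer alternative to exposing the dashboard at all is to tunnel to it with kubectl. With the current Helm chart, the entry point is the Kong proxy service; the exact service name depends on your release name:

```shell
# Tunnel to the dashboard instead of exposing it publicly.
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443
# Then open https://localhost:8443 in your browser.
```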

Apply the following YAML to create a service account with the cluster-admin role:

Warning: Granting cluster-admin permissions to a service account gives it full access to the entire cluster. Make sure you understand the security implications before proceeding. In production environments, more restrictive roles are strongly recommended.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: kube-ds-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-ds-admin-role-binding
subjects:
  - kind: ServiceAccount
    name: kube-ds-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io

Then generate a token for the service account:

Warning: The token generated below is valid for 1 hour. Adjust the --duration flag as needed for your use case. Avoid long-lived tokens in production environments due to increased security risks.

If you generate a long-lived token and want to revoke it later, you can do so by deleting the service account or the associated secret.

kubectl create token kube-ds-admin -n kube-system --duration=1h
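If you do need a long-lived token, Kubernetes can mint one as a Secret bound to the service account; deleting the Secret (or the service account) revokes it:

```yaml
# Long-lived token for the kube-ds-admin service account.
# Kubernetes populates the token into this Secret automatically.
apiVersion: v1
kind: Secret
metadata:
  name: kube-ds-admin-token
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: kube-ds-admin
type: kubernetes.io/service-account-token
```

Read it back with `kubectl get secret kube-ds-admin-token -n kube-system -o jsonpath='{.data.token}' | base64 -d`.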

Some time ago, I noticed that 1Password provides its own Kubernetes operator, which allows syncing specific 1Password items directly into a Kubernetes cluster. My curiosity and the security benefits convinced me to try it out.

During the setup, I ran into a very annoying issue. Everything seemed to be configured correctly, but the secret simply would not sync. After spending much more time on this than I want to admit, I finally realized that I had missed setting the operator.token.value in the Helm command. I had only attached the credentials.json file. Even though this is clearly documented in the 1Password documentation, I still managed to miss it.
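For reference, a working install needs both the credentials file and the operator token. Based on my reading of the chart’s documentation, it looks roughly like this (the token value is a placeholder):

```shell
helm repo add 1password https://1password.github.io/connect-helm-charts
helm upgrade --install connect 1password/connect \
  --set-file connect.credentials=1password-credentials.json \
  --set operator.create=true \
  --set operator.token.value="<your-connect-token>"  # the part I initially missed
```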

Once this was fixed, everything worked as expected. I was able to sync credentials into Kubernetes. I created a dummy application that displayed the synced secret, and when I changed the credential in 1Password, it was automatically synced to Kubernetes and the pod restarted.
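Syncing an item is then a small custom resource: the resulting Secret takes the resource’s name, and the item path points at the 1Password vault and item (both placeholders here):

```yaml
# Sync a 1Password item into a Kubernetes Secret named "my-app-credentials".
apiVersion: onepassword.com/v1
kind: OnePasswordItem
metadata:
  name: my-app-credentials
spec:
  itemPath: "vaults/my-vault/items/my-app"
```

The automatic pod restart I observed comes from the operator’s auto-restart feature, enabled via the `operator.1password.io/auto-restart: "true"` annotation on a namespace or deployment.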

Before discovering the 1Password operator, my plan was to encrypt secrets using Sealed Secrets. That way, I could store the encrypted files directly in my repository.
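For completeness, that workflow would have looked something like this: encrypt a plain Secret manifest with kubeseal against the in-cluster controller’s public key, then commit only the sealed file:

```shell
# Encrypt a plain Secret manifest; only the SealedSecret is committed to Git.
kubeseal --format yaml < my-secret.yaml > my-sealed-secret.yaml
```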

Next, I had to figure out how to deploy probably the most important services of all: databases. Unfortunately, I need several of them—MySQL, PostgreSQL, and MongoDB.

I decided to start with PostgreSQL because I need it for Zitadel. My first choice was the Zalando Postgres Operator, but after installing it, I ran into issues with backups and was generally not convinced by its behavior. I therefore decided to move on and try the CloudNativePG (CNPG) operator.

At first, I was very happy with it—it worked well and felt clean. However, later on, I experienced some issues with the initDB and backup functionality. My main problem with initDB was that I did not really want it. I prefer to store the database custom resource inside the service directory. For example, the database for Zitadel should live in the Zitadel folder.

The reason for this is that I want to deploy everything using ArgoCD in the future. In my opinion, it makes the most sense to keep related resources in one directory. Unfortunately, I did not find a way to fully disable the initDB behavior.

At some point, I decided that it might be acceptable to leave the default behavior in place. Still, it felt sloppy to generate a secret and just leave it sitting there without a proper backup, especially since the database it belongs to would probably stay empty forever.

As a workaround, I added a 1Password custom resource to the database directory and synced the default initDB credentials from 1Password.
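Put together, the workaround looks roughly like this: the CNPG Cluster bootstraps via initdb, but the owner credentials come from the 1Password-synced Secret instead of a generated one (names are examples, not my exact manifests):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-database-cluster
spec:
  instances: 2
  storage:
    size: 10Gi
  bootstrap:
    initdb:
      database: app
      owner: app
      # Reference the Secret synced from 1Password instead of
      # letting CNPG generate (and silently keep) its own.
      secret:
        name: postgres-initdb-credentials
```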

Note: Do not store secrets in your Git repository in plain text. Since the values are only base64-encoded, they can be easily decoded. Use a secret management solution like 1Password, HashiCorp Vault, or similar tools to manage secrets securely. Another option is to use Sealed Secrets to encrypt secrets before committing them to your repository.

# This is only the generated secret format expected by CNPG.
# In my setup, the secret is synced from 1Password to Kubernetes.

apiVersion: v1
kind: Secret
metadata:
  name: postgres-initdb-credentials
  namespace: my-database-cluster
data:
  username: bXktZGItdXNlcm5hbWU= # base64-encoded: `my-db-username`
  password: bXktc3VwZXItc2VjdXJlLWRiLXBhc3N3b3Jk # base64-encoded: `my-super-secure-db-password`
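As the note above says, these encoded values are trivially reversible; base64 is an encoding, not encryption:

```shell
# base64 is reversible; anyone who can read the Secret has the value.
echo -n 'my-db-username' | base64
# bXktZGItdXNlcm5hbWU=
echo 'bXktZGItdXNlcm5hbWU=' | base64 -d
# my-db-username
```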

Still to come:

  • MongoDB and MySQL
  • a generalized summary of the service deployments
  • Zitadel, which will be covered in a separate article