Protect services with Zitadel and OAuth2-Proxy
I have several services deployed and exposed on my Kubernetes cluster. Some of them need to be publicly accessible, while others should remain private. On top of that, I don’t fully trust the built-in security of some apps, so I decided to add an extra layer of protection with OAuth2-Proxy.
Setup
I already had a Zitadel instance running, along with an Ingress controller and Cert-Manager.
The first service I wanted to secure was the Kubernetes Dashboard. I deployed OAuth2-Proxy and configured it. At first, I mistakenly set up an internal upstream. This worked fine for the dashboard itself, but not for other apps, since OAuth2-Proxy only supports a single upstream.
That's when I realized I had configured OAuth2-Proxy as a reverse proxy instead of just as an authentication layer. I removed the upstream setting and instead defined allowed domains.
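If you want to be explicit that nothing is being proxied, oauth2-proxy also accepts a `static://` upstream that just returns a fixed status code for authenticated requests. A minimal sketch of that auth-only idea (the domain is a placeholder):

```toml
# Auth-only sketch: no real service is proxied. "static://202" makes
# oauth2-proxy answer authenticated requests with HTTP 202 instead of
# forwarding them, which is all the ingress auth subrequest needs.
upstreams = ["static://202"]
whitelist_domains = [".domain.tld"]  # leading dot allows subdomains
```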
One important detail: when setting domain values, you must include the leading dot (e.g., .domain.tld). Without it, subdomains won't be allowed through OAuth2-Proxy.
Here’s my final configuration:
```toml
provider = "oidc"
oidc_issuer_url = "https://zitadel.domain-a.tld"
scope = "openid email profile"
code_challenge_method = "S256" # Required for OIDC providers that use PKCE
pass_access_token = true
skip_provider_button = false
ssl_insecure_skip_verify = false
proxy_prefix = "/oauth2"

email_domains = [".domain-a.tld", ".domain-b.tld", "domain-a.tld", "domain-b.tld"]
whitelist_domains = [".domain-a.tld", ".domain-b.tld", "domain-a.tld", "domain-b.tld"]
cookie_domains = [".domain-a.tld", ".domain-b.tld", "domain-a.tld", "domain-b.tld"]
cookie_name = "_oauth2_proxy"
cookie_expire = "168h"
cookie_secure = true
cookie_httponly = true
cookie_samesite = "lax"
cookie_csrf_per_request = true
cookie_csrf_expire = "5m"
set_xauthrequest = true
pass_user_headers = true
pass_authorization_header = true
pass_basic_auth = false
reverse_proxy = false
show_debug_on_error = false
```
You also need to provide the following environment variables. While you could set the values directly in the config file, I prefer to keep them in 1Password and sync them into my namespace as Kubernetes Secrets, for better security and easier management:
```yaml
env:
  - name: OAUTH2_PROXY_CLIENT_ID
    valueFrom:
      secretKeyRef:
        name: oauth2-proxy-credentials
        key: client-id
  - name: OAUTH2_PROXY_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth2-proxy-credentials
        key: client-secret
  - name: OAUTH2_PROXY_COOKIE_SECRET
    valueFrom:
      secretKeyRef:
        name: oauth2-proxy-credentials
        key: cookie-secret
```
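The client ID and secret come from Zitadel, but the cookie secret is one you generate yourself. oauth2-proxy expects a random 16-, 24-, or 32-byte value, base64-encoded; one way to produce it is a short Python snippet (the function name is just for illustration):

```python
import base64
import os

def generate_cookie_secret() -> str:
    # 32 random bytes, URL-safe base64-encoded — a valid value for
    # OAUTH2_PROXY_COOKIE_SECRET (oauth2-proxy accepts 16, 24, or 32 bytes).
    return base64.urlsafe_b64encode(os.urandom(32)).decode()

print(generate_cookie_secret())
```

Store the output in your secret manager (1Password, in my case) rather than committing it anywhere.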
Note: If you're using Let's Encrypt staging certificates, you'll need to set ssl_insecure_skip_verify = true. Once you switch to production certificates, you can (and should) set it back to false.
Adding OAuth2-Proxy to Ingress
Once OAuth2-Proxy was properly configured and I could log in with my Zitadel account, I integrated it with my Ingress. After a bit of trial and error, I ended up with this setup, which now works reliably across all my services:
```yaml
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/auth-url: "https://oauth-proxy.domain.tld/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: >-
  https://oauth-proxy.domain.tld/oauth2/start?rd=$scheme://$host$request_uri
```
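Since the config above sets set_xauthrequest = true, oauth2-proxy returns the authenticated user's identity in response headers on the auth subrequest. If a backend app should see those headers, ingress-nginx can forward them with an extra annotation; a sketch (header names follow oauth2-proxy's X-Auth-Request convention):

```yaml
# Forward identity headers from the oauth2-proxy auth subrequest
# to the backend service.
nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-User,X-Auth-Request-Email"
```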
Depending on the service, I sometimes add these extra annotations:
```yaml
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # or "HTTP" if the service doesn’t use TLS
nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
```
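Put together, a protected service's Ingress ends up looking roughly like this; the app name, namespace, host, and TLS secret name are placeholders for your own values:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app             # placeholder
  namespace: my-namespace  # placeholder
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/auth-url: "https://oauth-proxy.domain.tld/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: >-
      https://oauth-proxy.domain.tld/oauth2/start?rd=$scheme://$host$request_uri
spec:
  tls:
    - hosts:
        - my-app.domain.tld
      secretName: my-app-tls  # Cert-Manager populates this
  rules:
    - host: my-app.domain.tld
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app  # placeholder
                port:
                  number: 80
```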
This setup is based on the official Ingress NGINX OAuth2-Proxy guide, with a few tweaks to fit my use case.