Install as a DaemonSet

The default install uses a Deployment, but it's possible to use a DaemonSet:

deployment:
  kind: DaemonSet

Configure Traefik Pod parameters

Extending /etc/hosts records

In some specific cases, you'll need to add extra records to the /etc/hosts file for the Traefik containers. You can configure it using hostAliases:

deployment:
  hostAliases:
  - ip: "127.0.0.1" # this is an example
    hostnames:
     - "foo.local"
     - "bar.local"

Extending DNS config

To configure additional DNS servers for your Traefik pod, you can use the dnsConfig option:

deployment:
  dnsConfig:
    nameservers:
      - 192.0.2.1 # this is an example
    searches:
      - ns1.svc.cluster-domain.example
      - my.dns.search.suffix
    options:
      - name: ndots
        value: "2"
      - name: edns0

Install in a dedicated namespace, with limited RBAC

The default install uses cluster-wide RBAC, but it can be restricted to the target namespace.

rbac:
  namespaced: true

Install with auto-scaling

When enabling the HPA to adjust the replica count according to CPU usage, you'll need to set resources and nullify replicas.

deployment:
  replicas: null
resources:
  requests:
    cpu: "100m"
    memory: "50Mi"
  limits:
    cpu: "300m"
    memory: "150Mi"
autoscaling:
  enabled: true
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80

Access Traefik dashboard without exposing it

This Chart does not expose the Traefik local dashboard by default. The upstream documentation explains why:

Enabling the API in production is not recommended, because it will expose all configuration elements, including sensitive data.

It also says:

In production, it should be at least secured by authentication and authorizations.

Thus, there are multiple ways to expose the dashboard. For instance, after enabling the creation of the dashboard IngressRoute in the values:

ingressRoute:
  dashboard:
    enabled: true

The Traefik admin port can then be forwarded locally:

kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name) 8080:8080

This command makes the dashboard accessible at http://127.0.0.1:8080/dashboard/
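
If the chart was installed in a dedicated namespace (traefik is used here only as an example), pass it to both commands:

kubectl port-forward --namespace traefik $(kubectl get pods --namespace traefik --selector "app.kubernetes.io/name=traefik" --output=name) 8080:8080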

Publish and protect Traefik Dashboard with basic Auth

To expose the dashboard in a secure way as recommended in the documentation, it may be useful to override the router rule to specify a domain to match, or to accept requests on the root path (/) in order to redirect them to /dashboard/.

# Create an IngressRoute for the dashboard
ingressRoute:
  dashboard:
    enabled: true
    # Custom match rule with host domain
    matchRule: Host(`traefik-dashboard.example.com`)
    entryPoints: ["websecure"]
    # Add custom middlewares: authentication and redirection
    middlewares:
      - name: traefik-dashboard-auth

# Create the custom middlewares used by the IngressRoute dashboard (can also be created in another way).
# /!\ Yes, you need to replace "changeme" password with a better one. /!\
extraObjects:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: traefik-dashboard-auth-secret
    type: kubernetes.io/basic-auth
    stringData:
      username: admin
      password: changeme

  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: traefik-dashboard-auth
    spec:
      basicAuth:
        secret: traefik-dashboard-auth-secret
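
The middlewares above only handle authentication. If you also want to accept requests on the root path (/) and redirect them to /dashboard/, a redirectRegex middleware can be added as an extra entry under extraObjects and referenced in ingressRoute.dashboard.middlewares. This is only a sketch; the middleware name and the regex are illustrative:

  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: traefik-dashboard-redirect # hypothetical name
    spec:
      redirectRegex:
        # redirect requests on the root path to /dashboard/
        regex: ^(https?://[^/]+)/?$
        replacement: ${1}/dashboard/
        permanent: true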

Publish and protect Traefik Dashboard with an Ingress

Exposing the dashboard without an IngressRoute is more complicated and less secure. You'll need to create an internal Service exposing the Traefik API on the dedicated traefik entrypoint. This internal Service can be created by another tool, with the extraObjects section, or using custom services.

You'll need to double-check:

  1. The Service selector matches your setup.
  2. The Middleware annotation on the Ingress: the default prefix should be replaced with the namespace where Traefik (and the Middleware) is deployed.

ingressRoute:
  dashboard:
    enabled: false
additionalArguments:
- "--api.insecure=true"
# Create the service, middleware and Ingress used to expose the dashboard (can also be created in another way).
# /!\ Yes, you need to replace "changeme" password with a better one. /!\
extraObjects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: traefik-api
    spec:
      type: ClusterIP
      selector:
        app.kubernetes.io/name: traefik
        app.kubernetes.io/instance: traefik-default
      ports:
      - port: 8080
        name: traefik
        targetPort: 8080
        protocol: TCP

  - apiVersion: v1
    kind: Secret
    metadata:
      name: traefik-dashboard-auth-secret
    type: kubernetes.io/basic-auth
    stringData:
      username: admin
      password: changeme

  - apiVersion: traefik.io/v1alpha1
    kind: Middleware
    metadata:
      name: traefik-dashboard-auth
    spec:
      basicAuth:
        secret: traefik-dashboard-auth-secret

  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: traefik-dashboard
      annotations:
        traefik.ingress.kubernetes.io/router.entrypoints: websecure
        traefik.ingress.kubernetes.io/router.middlewares: default-traefik-dashboard-auth@kubernetescrd
    spec:
      rules:
      - host: traefik-dashboard.example.com
        http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: traefik-api
                port:
                  name: traefik

Install on AWS

It can use the native AWS support of Kubernetes:

service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb

Or, if the AWS Load Balancer Controller is installed:

service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip
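
With recent versions of the AWS Load Balancer Controller, the usual set of annotations looks more like the following (a sketch; check the controller documentation for the version you run):

service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing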

Install on GCP

A regional IP with a Service can be used:

service:
  spec:
    loadBalancerIP: "1.2.3.4"

Or a global IP on an Ingress:

service:
  type: NodePort
extraObjects:
  - apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: traefik
      annotations:
        kubernetes.io/ingress.global-static-ip-name: "myGlobalIpName"
    spec:
      defaultBackend:
        service:
          name: traefik
          port:
            number: 80

Or a global IP on a Gateway, with continuous HTTPS encryption:

ports:
  websecure:
    appProtocol: HTTPS # Hint for Google L7 load balancer
service:
  type: ClusterIP
extraObjects:
- apiVersion: gateway.networking.k8s.io/v1beta1
  kind: Gateway
  metadata:
    name: traefik
    annotations:
      networking.gke.io/certmap: "myCertificateMap"
  spec:
    gatewayClassName: gke-l7-global-external-managed
    addresses:
    - type: NamedAddress
      value: "myGlobalIPName"
    listeners:
    - name: https
      protocol: HTTPS
      port: 443
- apiVersion: gateway.networking.k8s.io/v1beta1
  kind: HTTPRoute
  metadata:
    name: traefik
  spec:
    parentRefs:
    - kind: Gateway
      name: traefik
    rules:
    - backendRefs:
      - name: traefik
        port: 443
- apiVersion: networking.gke.io/v1
  kind: HealthCheckPolicy
  metadata:
    name: traefik
  spec:
    default:
      config:
        type: HTTP
        httpHealthCheck:
          port: 8080
          requestPath: /ping
    targetRef:
      group: ""
      kind: Service
      name: traefik

Install on Azure

A static IP on a resource group can be used:

service:
  spec:
    loadBalancerIP: "1.2.3.4"
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup

Here is a more complete example, which also uses the native Let's Encrypt feature of Traefik Proxy with Azure DNS:

persistence:
  enabled: true
  size: 128Mi
certificatesResolvers:
  letsencrypt:
    acme:
      email: "{{ letsencrypt_email }}"
      #caServer: https://acme-v02.api.letsencrypt.org/directory # Production server
      caServer: https://acme-staging-v02.api.letsencrypt.org/directory # Staging server
      dnsChallenge:
        provider: azuredns
      storage: /data/acme.json
env:
  - name: AZURE_CLIENT_ID
    value: "{{ azure_dns_challenge_application_id }}"
  - name: AZURE_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: azuredns-secret
        key: client-secret
  - name: AZURE_SUBSCRIPTION_ID
    value: "{{ azure_subscription_id }}"
  - name: AZURE_TENANT_ID
    value: "{{ azure_tenant_id }}"
  - name: AZURE_RESOURCE_GROUP
    value: "{{ azure_resource_group }}"
deployment:
  initContainers:
    - name: volume-permissions
      image: busybox:latest
      command: ["sh", "-c", "ls -la /; touch /data/acme.json; chmod -v 600 /data/acme.json"]
      volumeMounts:
      - mountPath: /data
        name: data
podSecurityContext:
  fsGroup: 65532
  fsGroupChangePolicy: "OnRootMismatch"
service:
  spec:
    type: LoadBalancer
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: "{{ azure_node_resource_group }}"
    service.beta.kubernetes.io/azure-pip-name: "{{ azure_resource_group }}"
    service.beta.kubernetes.io/azure-dns-label-name: "{{ azure_resource_group }}"
    service.beta.kubernetes.io/azure-allowed-ip-ranges: "{{ ip_range | join(',') }}"
extraObjects:
  - apiVersion: v1
    kind: Secret
    metadata:
      name: azuredns-secret
      namespace: traefik
    type: Opaque
    stringData:
      client-secret: "{{ azure_dns_challenge_application_secret }}"

Use an IngressClass

The default install comes with an IngressClass resource that can be enabled on the providers.

Here's how one can enable it on the CRD and Ingress Kubernetes providers:

ingressClass:
  name: traefik
providers:
  kubernetesCRD:
    ingressClass: traefik
  kubernetesIngress:
    ingressClass: traefik
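
An Ingress can then select this controller explicitly through its ingressClassName (a sketch; the names example and example-service are illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: traefik
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80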

Use HTTP3

By default, it will use a LoadBalancer Service with mixed protocols on the websecure entrypoint. Mixed-protocol Services have been available since Kubernetes v1.20 and are in beta as of v1.24. Availability may depend on your Kubernetes provider.

When using TCP and UDP with a single Service, you may encounter a known Kubernetes issue. If you want to avoid it, you can set ports.websecure.http3.advertisedPort to another value than 443:

ports:
  websecure:
    http3:
      enabled: true
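
For example, to advertise HTTP/3 on a different UDP port (a sketch; 8443 here is an arbitrary choice):

ports:
  websecure:
    http3:
      enabled: true
      advertisedPort: 8443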

You can also create two Services, one for TCP and one for UDP:

ports:
  websecure:
    http3:
      enabled: true
service:
  single: false

Use PROXY protocol on Digital Ocean

PROXY protocol is a protocol for sending client connection information, such as origin IP addresses and port numbers, to the final backend server, rather than discarding it at the load balancer.

.DOTrustedIPs: &DOTrustedIPs
  - 127.0.0.1/32
  - 10.120.0.0/16

service:
  enabled: true
  type: LoadBalancer
  annotations:
    # This will tell DigitalOcean to enable the proxy protocol.
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
  spec:
    # This is the default and should stay as cluster to keep the DO health checks working.
    externalTrafficPolicy: Cluster

ports:
  web:
    forwardedHeaders:
      trustedIPs: *DOTrustedIPs
    proxyProtocol:
      trustedIPs: *DOTrustedIPs
  websecure:
    forwardedHeaders:
      trustedIPs: *DOTrustedIPs
    proxyProtocol:
      trustedIPs: *DOTrustedIPs

Enable plugin storage

This chart follows common security practices: it runs as non-root with a read-only root filesystem. When enabling a plugin which needs storage, you have to add it to the deployment.

Here is a simple example with CrowdSec. You may want to replace it with your plugin, or see the complete CrowdSec example here.

deployment:
  additionalVolumes:
  - name: plugins
    emptyDir: {}
additionalVolumeMounts:
- name: plugins
  mountPath: /plugins-storage
additionalArguments:
- "--experimental.plugins.bouncer.moduleName=github.com/maxlerebourg/crowdsec-bouncer-traefik-plugin"
- "--experimental.plugins.bouncer.version=v1.1.9"

Use Traefik native Let's Encrypt integration, without cert-manager

In Traefik Proxy, ACME certificates are stored in a JSON file.

This file needs to have 0600 permissions, meaning only the owner of the file has read and write access to it. By default, Kubernetes recursively changes ownership and permissions for the content of each volume.

=> An initContainer can be used to avoid an issue on this sensitive file. See #396 for more details. An example of such an initContainer is shown in the CloudFlare example below.

Once the provider is ready, it can be used in an IngressRoute:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: [...]
spec:
  entryPoints: [...]
  routes: [...]
  tls:
    certResolver: letsencrypt

Change apiVersion to traefik.containo.us/v1alpha1 for charts prior to v28.0.0

See the list of supported providers for others.

Example with CloudFlare

This example needs a CloudFlare token in a Kubernetes Secret and a working StorageClass.

Step 1: Create Secret with CloudFlare token:

---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare
type: Opaque
stringData:
  token: {{ SET_A_VALID_TOKEN_HERE }}

Step 2: Configure the chart values:

persistence:
  enabled: true
  storageClass: xxx
certificatesResolvers:
  letsencrypt:
    acme:
      dnsChallenge:
        provider: cloudflare
      storage: /data/acme.json
env:
  - name: CF_DNS_API_TOKEN
    valueFrom:
      secretKeyRef:
        name: cloudflare
        key: token
deployment:
  initContainers:
    - name: volume-permissions
      image: busybox:latest
      command: ["sh", "-c", "touch /data/acme.json; chmod -v 600 /data/acme.json"]
      volumeMounts:
      - mountPath: /data
        name: data
podSecurityContext:
  fsGroup: 65532
  fsGroupChangePolicy: "OnRootMismatch"

Note

With Traefik Hub, certificates can be stored as a Secret on Kubernetes with the distributedAcme resolver.

Provide default certificate with cert-manager and CloudFlare DNS

Setup:

  • cert-manager installed in the cert-manager namespace
  • A Cloudflare account with a DNS zone

Step 1: Create the Secret and Issuer needed by cert-manager with your API token. See the cert-manager documentation for creating this token with the needed rights:

---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare
  namespace: traefik
type: Opaque
stringData:
  api-token: XXX
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: cloudflare
  namespace: traefik
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: email@example.com
    privateKeySecretRef:
      name: cloudflare-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare
              key: api-token

Step 2: Create a Certificate in the traefik namespace:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-example-com
  namespace: traefik
spec:
  secretName: wildcard-example-com-tls
  dnsNames:
    - "example.com"
    - "*.example.com"
  issuerRef:
    name: cloudflare
    kind: Issuer

Step 3: Check that it's ready:

kubectl get certificate -n traefik

If needed, the logs of the cert-manager pod can give you more information.
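
For example (assuming cert-manager was installed with its default names in the cert-manager namespace):

kubectl describe certificate wildcard-example-com -n traefik
kubectl logs -n cert-manager deploy/cert-manager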

Step 4: Use it on the default TLS Store in the values.yaml file of this Helm Chart:

tlsStore:
  default:
    defaultCertificate:
      secretName: wildcard-example-com-tls

Step 5: Enjoy. All your IngressRoutes now use this certificate by default.

They should use the websecure entrypoint, like this:

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: example-com-tls
spec:
  entryPoints:
    - websecure
  routes:
  - match: Host(`test.example.com`)
    kind: Rule
    services:
    - name: XXXX
      port: 80

Add custom (internal) services

In some cases you might want to have more than one Traefik service within your cluster, e.g. a default (external) one and a service that is only exposed internally to pods within your cluster.

The service.additionalServices option allows you to add an arbitrary number of services, provided as a name-to-service-details mapping; for example, you can use the following values:

service:
  additionalServices:
    internal:
      type: ClusterIP
      labels:
        traefik-service-label: internal

Ports can then be exposed on this service by using the expose mapping (service name to boolean) on the respective port; e.g. to expose the traefik API port on your internal service so pods within your cluster can use it, you can do:

ports:
  traefik:
    expose:
      # Sensitive data should not be exposed on the internet
      # => Keep this disabled !
      default: false
      internal: true

This will then produce an additional Service manifest, looking like this:

---
# Source: traefik/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-internal
  namespace: traefik
[...]
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: traefik-traefik
  ports:
  - port: 8080
    name: "traefik"
    targetPort: traefik
    protocol: TCP
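
Pods within the cluster can then reach this Service by name. As a quick check (a sketch, assuming the chart's default ping configuration on the traefik entrypoint and an install in the traefik namespace):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl http://traefik-internal.traefik.svc:8080/ping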

Use this Chart as a dependency of your own chart

First, let's create a default Helm Chart, with Traefik as a dependency.

helm create foo
cd foo
echo "
dependencies:
  - name: traefik
    version: "24.0.0"
    repository: "https://traefik.github.io/charts"
" >> Chart.yaml

Second, let's tune some values like enabling HPA:

cat <<-EOF >> values.yaml
traefik:
  autoscaling:
    enabled: true
    maxReplicas: 3
EOF

Third, one can see if it works as expected:

helm dependency update
helm dependency build
helm template . | grep -A 14 -B 3 Horizontal

It should produce this output:

---
# Source: foo/charts/traefik/templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: release-name-traefik
  namespace: flux-system
  labels:
    app.kubernetes.io/name: traefik
    app.kubernetes.io/instance: release-name-flux-system
    helm.sh/chart: traefik-24.0.0
    app.kubernetes.io/managed-by: Helm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: release-name-traefik
  maxReplicas: 3

Configure TLS

The TLS options allow one to configure some parameters of the TLS connection.

tlsOptions:
  default:
    labels: {}
    sniStrict: true
  custom-options:
    labels: {}
    curvePreferences:
      - CurveP521
      - CurveP384
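
An IngressRoute can then opt in to the non-default options by name (a sketch; the route and service names are illustrative, and if the IngressRoute lives in a different namespace than the chart, the options reference also needs a namespace field):

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: example-custom-tls
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`example.com`)
      kind: Rule
      services:
        - name: example-service
          port: 80
  tls:
    options:
      name: custom-options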

Use the latest build of Traefik v3 from master

An experimental build of Traefik Proxy is available on a specific repository.

It can be used with those values:

image:
  repository: traefik/traefik
  tag: experimental-v3.0

Use Prometheus Operator

Optional support for this operator is included in this Chart. See the operator's documentation for more details.

It can be used with those values:

metrics:
  prometheus:
    service:
      enabled: true
    disableAPICheck: false
    serviceMonitor:
      enabled: true
      metricRelabelings:
        - sourceLabels: [__name__]
          separator: ;
          regex: ^fluentd_output_status_buffer_(oldest|newest)_.+
          replacement: $1
          action: drop
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_node_name]
          separator: ;
          regex: ^(.*)$
          targetLabel: nodename
          replacement: $1
          action: replace
      jobLabel: traefik
      interval: 30s
      honorLabels: true
    prometheusRule:
      enabled: true
      rules:
        - alert: TraefikDown
          expr: up{job="traefik"} == 0
          for: 5m
          labels:
            context: traefik
            severity: warning
          annotations:
            summary: "Traefik Down"
            description: "{{ $labels.pod }} on {{ $labels.nodename }} is down"

Use Kubernetes Gateway API

One can use the new stable Kubernetes Gateway API provider by setting the following values:

providers:
  kubernetesGateway:
    enabled: true

With those values, a whoami service can be exposed with an HTTPRoute:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami

---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami
spec:
  parentRefs:
    - name: traefik-gateway
  hostnames:
    - whoami.docker.localhost
  rules:
    - matches:
        - path:
            type: Exact
            value: /

      backendRefs:
        - name: whoami
          port: 80
          weight: 1

Once it's applied, whoami should be accessible on http://whoami.docker.localhost/
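
For instance, from a machine where whoami.docker.localhost resolves to the cluster's web entrypoint:

curl http://whoami.docker.localhost/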

In this example, the Deployment and the HTTPRoute should be deployed in the same namespace as the Traefik Gateway: the Chart namespace.

Use Kubernetes Gateway API with cert-manager

One can use the new stable Kubernetes Gateway API provider with automatic TLS certificate delivery (with cert-manager) by setting the following values:

providers:
  kubernetesGateway:
    enabled: true
gateway:
  enabled: true
  annotations:
    cert-manager.io/issuer: selfsigned-issuer
  listeners:
    websecure:
      hostname: whoami.docker.localhost
      port: 8443
      protocol: HTTPS
      certificateRefs:
        - name: whoami-tls

Install cert-manager:

helm repo add jetstack https://charts.jetstack.io --force-update
helm upgrade --install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.15.1 \
--set crds.enabled=true \
--set "extraArgs={--enable-gateway-api}"

With those values, a whoami service can be exposed with an HTTPRoute on both HTTP and HTTPS:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
spec:
  replicas: 2
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami

---
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  selector:
    app: whoami
  ports:
    - protocol: TCP
      port: 80

---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: whoami
spec:
  parentRefs:
    - name: traefik-gateway
  hostnames:
    - whoami.docker.localhost
  rules:
    - matches:
        - path:
            type: Exact
            value: /

      backendRefs:
        - name: whoami
          port: 80
          weight: 1

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}

Once it's applied, whoami should be accessible on https://whoami.docker.localhost/
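
For instance, from a machine where whoami.docker.localhost resolves to the cluster's entrypoints (--insecure is needed because the example Issuer is self-signed):

curl --insecure https://whoami.docker.localhost/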