Hi,
I am having trouble (maybe because I don't fully understand k8s/Traefik) getting the migration done. I can access the Traefik admin interface if I go straight to: http://10.11.6.73:8080/dashboard/#/http/services
but I can no longer get through via my physical load balancer.
So traffic goes over SSL (443) from the LB to the Traefik instance (443), where, based on the "Host" header, it is supposed to route the traffic to 8080.
Here are the config files:
Hi,
I basically need Traefik to listen on 443 for all the traffic and apply an IngressRoute that sends traffic to 8080 if it matches Host=k1vip1.domain.com. Meaning that if I type app.domain.com, the traffic goes to Traefik for it to route to the appropriate pod, but if I type k1vip1.domain.com, it means I want to access the Traefik admin interface.
Is this a wrong approach?
Thanks.
It's not a wrong approach per se. It's just that when you have several Traefik nodes, each one has its own dashboard, and what you see on each dashboard could differ depending on your cluster configuration. If they are load-balanced you hit a random one, so I was wondering whether that's useful.
On second thought, if your Traefik nodes in the Kubernetes cluster are all configured the same, they should pick up the same configuration from Ingresses/IngressRoutes, so the dashboards should be identical; and since they are read-only, it should not really matter which one you hit. So that makes sense.
As for your question, here is my take on it.
For exposing the dashboard you have two options: api and api.insecure; you are using the latter. I would suggest using the former and then routing incoming requests to api@internal. A Docker example is here, but you can adapt it to your use case. You will probably need to use the file provider in conjunction with the kubernetes-ingress one for this, since the Kubernetes one cannot refer to Traefik services.
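For illustration, a minimal dynamic configuration for the file provider could look like this. Treat it as a sketch: the hostname and the entry point name are assumptions based on the setup described above, not confirmed values.

```yaml
# dynamic.yml, loaded through the file provider
# Routes dashboard requests to Traefik's built-in api@internal service.
http:
  routers:
    dashboard:
      rule: Host(`k1vip1.domain.com`)   # assumed dashboard hostname
      entryPoints:
        - websecure                     # assumed name of the :443 entry point
      service: api@internal             # Traefik's internal API/dashboard service
      tls: {}
```

With this in place, the dashboard is served on 443 through normal HTTP routing, and port 8080 never needs to be exposed at all.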
Alternatively (and I do not recommend this), you can set up routing to 8080 as you are trying to.
So what does not work? You need to determine where it breaks down, because there are quite a few steps here (browser > LB > Traefik Ingress > Traefik > IngressRoute > K8s Service > Dashboard). This is not entirely scientific, since some of the above are just configuration and not run-time, but it will do.
I'm seeing entryPoints: [] on your IngressRoute, and I think this is one of the possible problems.
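For reference, here is a sketch of an IngressRoute with the entryPoints field actually populated. The names (entry point, namespace, service) are assumptions to be adapted to your manifests:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: traefik-dashboard
  namespace: traefik
spec:
  entryPoints:
    - websecure          # must match an entry point defined in the static config (e.g. :443)
  routes:
    - match: Host(`k1vip1.domain.com`)   # assumed dashboard hostname
      kind: Rule
      services:
        - name: traefik-web-ui           # the Service that exposes port 8080
          port: 8080
```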
You are also not telling us what you are observing. Obviously something is "not working", but what exactly are you seeing?
Hi @zespri , sorry for the late reply.
I am fine following whatever configuration seems better (I just need Traefik working as an ingress and the dashboard available), but I don't understand using two providers: Kubernetes, like I am using now, and the file provider. I am lost at this point.
Thanks.
Hi @titansmc! Thanks for your interest and the report.
First, there is an issue in the configuration of the Traefik Deployment: you are providing both CLI arguments to Traefik AND a traefik.toml file through the ConfigMap.
It means that you have to choose between CLI and file. This constraint does not apply to the dynamic configuration.
=> If you check the logs of any of the Traefik pods, you'll see a line telling you which configuration was used at load time, so you'll be able to deduce which one is being ignored.
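For example, if you go with the CLI side, the Deployment's container args can carry the whole static configuration, and the traefik.toml ConfigMap can be dropped. The flag values below are a sketch, assuming a :443 entry point, the Kubernetes CRD provider, and a v2 image; adapt them to your setup:

```yaml
# Deployment snippet: static configuration entirely via CLI arguments
containers:
  - name: traefik
    image: traefik:v2.4                    # assumed image tag
    args:
      - --entrypoints.websecure.address=:443
      - --providers.kubernetescrd          # watches IngressRoute objects
      - --api=true                         # dashboard via api@internal (secure mode)
```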
Then, can you start with only 1 replica of the Traefik pod? The goal is to have a first viable, working configuration before going down the road of a distributed setup.
Remove the port "admin | 8080" from the Service named traefik-ingress-lb to avoid exposing Traefik's port 8080 to the outside. Exposing port 8080 of the Traefik pod(s) is done with the other Service, named traefik-web-ui.
To simplify your setup, edit the Service named traefik-web-ui so that both the port: and targetPort: directives have the same value, 8080 (easier setup: LB (443) -> Traefik (443) -> Traefik itself (8080)). The goal is to remove any other complexity to reach a working configuration. Don't forget to adapt the port in the services: section of the IngressRoute as well.
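Put together, the simplified Service could look like this (namespace and selector are taken from your manifests; treat this as a sketch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: admin
      port: 8080        # same value on both sides keeps the mapping simple
      targetPort: 8080  # dashboard port on the Traefik pod
```

The services: section of the IngressRoute then points at traefik-web-ui on port 8080.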
The goal of Kubernetes Services is to expose the Pods through the network, because a Pod's IP can change, or Pods can move. If you use the command line kubectl get svc --namespace=traefik, you can see the Kubernetes Services created within the namespace named traefik.
The definitions of these Services are found in the YAML manifest files where the kind: attribute has the value Service.
In the initial message of this post, you provided the content of the YAML manifest files you used to install Traefik on your Kubernetes cluster: there are 2 Services defined:
One named traefik-ingress-service, which exposes/load-balances the private TCP ports 80, 443 and 8080 of every pod whose selector k8s-app is set to the value traefik-ingress-lb, to the public ports (respectively) 80, 443 and 8080 of the Service's own external IP.
This "Kubernetes Service" allows the Ingress Controller to be reachable from the outside. So the port 8080 should not be defined here, as it exposes the port 8080 publically, while you want the dashboard to be available on HTTP/HTTPS on 80/443 with HTTP routing (host, path, auth, etc.).
Please note that, even though you specified 3 "external static IPs" on this Service, the default type of a Service in Kubernetes is ClusterIP, which means it is reachable only from inside the Kubernetes network. I'm not sure about the resulting behavior, but I would set the type of this Service to NodePort and ensure that the published ports are configured correctly on the external load balancer. Please check the Kubernetes documentation to understand what is going on here.
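Declaring the type explicitly would look like this (the ports shown are the ones from your manifests, minus 8080; everything else is a sketch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: traefik-ingress-service
  namespace: traefik
spec:
  type: NodePort             # reachable on every node's IP, for the external LB to target
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - name: web
      port: 80
      targetPort: 80
    - name: websecure
      port: 443
      targetPort: 443
```

Note that Kubernetes allocates node ports in the 30000-32767 range unless you pin them with nodePort:, and the external load balancer must forward to those node ports.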
One named traefik-web-ui, which exposes/load-balances the private TCP port 8080 of every pod whose selector k8s-app is set to the value traefik-ingress-lb, to the public port 80. This Service is only internal to the Kubernetes network, as its default type is ClusterIP.
It is fine to keep this service private, as you want it served by the Ingress.
This "Kubernetes Service" is already mapped to the "Traefik Service" of the IngressRoute. My recommendation is to stay with the port 8080 for the exposed port of the service (because it is currently 80 for you right now).