Traefik v2.2 on k8s + Helm 3 Chart = CrashLoopBackOff [logs don't point to anything]

Could someone chime in here? The pod gets restarted roughly every 40 seconds (CrashLoopBackOff), but nothing in the debug logs below looks like an actual error.

Logs:

time="2020-05-29T06:37:34Z" level=info msg="Configuration loaded from file: /etc/traefik/traefik.yaml"
time="2020-05-29T06:37:34Z" level=info msg="Traefik version 2.2.1 built on 2020-04-29T18:02:09Z"
time="2020-05-29T06:37:34Z" level=debug msg="Static configuration loaded {\"global\":{\"checkNewVersion\":true},\"serversTransport\":{\"maxIdleConnsPerHost\":200},\"entryPoints\":{\"traefik\":{\"address\":\":8080\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":10000000000},\"respondingTimeouts\":{\"idleTimeout\":180000000000}},\"forwardedHeaders\":{},\"http\":{}},\"web\":{\"address\":\":8000\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":10000000000},\"respondingTimeouts\":{\"idleTimeout\":180000000000}},\"forwardedHeaders\":{},\"http\":{\"redirections\":{\"entryPoint\":{\"to\":\":443\",\"https\":\"https\",\"permanent\":true,\"priority\":2147483647}}}},\"websecure\":{\"address\":\":4443\",\"transport\":{\"lifeCycle\":{\"graceTimeOut\":10000000000},\"respondingTimeouts\":{\"idleTimeout\":180000000000}},\"forwardedHeaders\":{},\"http\":{}}},\"providers\":{\"providersThrottleDuration\":2000000000,\"kubernetesIngress\":{},\"kubernetesCRD\":{}},\"api\":{\"insecure\":true,\"dashboard\":true},\"log\":{\"level\":\"DEBUG\",\"format\":\"common\"}}"
time="2020-05-29T06:37:34Z" level=info msg="\nStats collection is disabled.\nHelp us improve Traefik by turning this feature on :)\nMore details on: https://docs.traefik.io/contributing/data-collection/\n"
time="2020-05-29T06:37:34Z" level=debug msg="Start TCP Server" entryPointName=web
time="2020-05-29T06:37:34Z" level=info msg="Starting provider aggregator.ProviderAggregator {}"
time="2020-05-29T06:37:34Z" level=debug msg="Start TCP Server" entryPointName=websecure
time="2020-05-29T06:37:34Z" level=debug msg="Start TCP Server" entryPointName=traefik
time="2020-05-29T06:37:34Z" level=info msg="Starting provider *crd.Provider {}"
time="2020-05-29T06:37:34Z" level=debug msg="Using label selector: \"\"" providerName=kubernetescrd
time="2020-05-29T06:37:34Z" level=info msg="label selector is: \"\"" providerName=kubernetescrd
time="2020-05-29T06:37:34Z" level=info msg="Creating in-cluster Provider client" providerName=kubernetescrd
time="2020-05-29T06:37:34Z" level=info msg="Starting provider *ingress.Provider {}"
time="2020-05-29T06:37:34Z" level=debug msg="Using Ingress label selector: \"\"" providerName=kubernetes
time="2020-05-29T06:37:34Z" level=info msg="ingress label selector is: \"\"" providerName=kubernetes
time="2020-05-29T06:37:34Z" level=info msg="Creating in-cluster Provider client" providerName=kubernetes
time="2020-05-29T06:37:34Z" level=info msg="Starting provider *traefik.Provider {}"
time="2020-05-29T06:37:34Z" level=debug msg="Configuration received from provider internal: {\"http\":{\"routers\":{\"api\":{\"entryPoints\":[\"traefik\"],\"service\":\"api@internal\",\"rule\":\"PathPrefix(`/api`)\",\"priority\":2147483646},\"dashboard\":{\"entryPoints\":[\"traefik\"],\"middlewares\":[\"dashboard_redirect@internal\",\"dashboard_stripprefix@internal\"],\"service\":\"dashboard@internal\",\"rule\":\"PathPrefix(`/`)\",\"priority\":2147483645},\"web-to-443\":{\"entryPoints\":[\"web\"],\"middlewares\":[\"redirect-web-to-443\"],\"service\":\"noop@internal\",\"rule\":\"HostRegexp(`{host:.+}`)\",\"priority\":2147483647}},\"services\":{\"api\":{},\"dashboard\":{},\"noop\":{}},\"middlewares\":{\"dashboard_redirect\":{\"redirectRegex\":{\"regex\":\"^(http:\\\\/\\\\/[^:\\\\/]+(:\\\\d+)?)\\\\/$\",\"replacement\":\"${1}/dashboard/\",\"permanent\":true}},\"dashboard_stripprefix\":{\"stripPrefix\":{\"prefixes\":[\"/dashboard/\",\"/dashboard\"]}},\"redirect-web-to-443\":{\"redirectScheme\":{\"scheme\":\"https\",\"port\":\"443\",\"permanent\":true}}}},\"tcp\":{},\"tls\":{}}" providerName=internal
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware api@internal" middlewareType=TracingForwarder entryPointName=traefik routerName=api@internal middlewareName=tracing
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware dashboard@internal" middlewareName=tracing middlewareType=TracingForwarder entryPointName=traefik routerName=dashboard@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_stripprefix@internal middlewareType=StripPrefix
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_stripprefix@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal middlewareType=RedirectRegex
time="2020-05-29T06:37:34Z" level=debug msg="Setting up redirection from ^(http:\\/\\/[^:\\/]+(:\\d+)?)\\/$ to ${1}/dashboard/" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal middlewareType=RedirectRegex
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" entryPointName=traefik middlewareName=traefik-internal-recovery middlewareType=Recovery
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware noop@internal" routerName=web-to-443@internal entryPointName=web middlewareName=tracing middlewareType=TracingForwarder
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" entryPointName=web routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal middlewareType=RedirectScheme
time="2020-05-29T06:37:34Z" level=debug msg="Setting up redirection to https 443" entryPointName=web routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal middlewareType=RedirectScheme
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" entryPointName=web routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" middlewareType=Recovery entryPointName=web middlewareName=traefik-internal-recovery
time="2020-05-29T06:37:34Z" level=debug msg="No default certificate, generating one"
time="2020-05-29T06:37:34Z" level=debug msg="Configuration received from provider kubernetes: {\"http\":{},\"tcp\":{}}" providerName=kubernetes
time="2020-05-29T06:37:34Z" level=debug msg="Configuration received from provider kubernetescrd: {\"http\":{},\"tcp\":{},\"udp\":{},\"tls\":{}}" providerName=kubernetescrd
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware noop@internal" entryPointName=web routerName=web-to-443@internal middlewareName=tracing middlewareType=TracingForwarder
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" middlewareType=RedirectScheme routerName=web-to-443@internal entryPointName=web middlewareName=redirect-web-to-443@internal
time="2020-05-29T06:37:34Z" level=debug msg="Setting up redirection to https 443" entryPointName=web middlewareName=redirect-web-to-443@internal middlewareType=RedirectScheme routerName=web-to-443@internal
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" routerName=web-to-443@internal entryPointName=web middlewareName=redirect-web-to-443@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" middlewareType=Recovery entryPointName=web middlewareName=traefik-internal-recovery
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware api@internal" entryPointName=traefik routerName=api@internal middlewareName=tracing middlewareType=TracingForwarder
time="2020-05-29T06:37:34Z" level=debug msg="Added outgoing tracing middleware dashboard@internal" entryPointName=traefik routerName=dashboard@internal middlewareType=TracingForwarder middlewareName=tracing
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" middlewareType=StripPrefix entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_stripprefix@internal
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" routerName=dashboard@internal entryPointName=traefik middlewareName=dashboard_stripprefix@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal middlewareType=RedirectRegex
time="2020-05-29T06:37:34Z" level=debug msg="Setting up redirection from ^(http:\\/\\/[^:\\/]+(:\\d+)?)\\/$ to ${1}/dashboard/" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal middlewareType=RedirectRegex
time="2020-05-29T06:37:34Z" level=debug msg="Adding tracing to middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal
time="2020-05-29T06:37:34Z" level=debug msg="Creating middleware" middlewareType=Recovery middlewareName=traefik-internal-recovery entryPointName=traefik
time="2020-05-29T06:37:34Z" level=debug msg="No default certificate, generating one"
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:35Z" level=debug msg="Added outgoing tracing middleware api@internal" entryPointName=traefik routerName=api@internal middlewareName=tracing middlewareType=TracingForwarder
time="2020-05-29T06:37:35Z" level=debug msg="Added outgoing tracing middleware dashboard@internal" entryPointName=traefik routerName=dashboard@internal middlewareName=tracing middlewareType=TracingForwarder
time="2020-05-29T06:37:35Z" level=debug msg="Creating middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_stripprefix@internal middlewareType=StripPrefix
time="2020-05-29T06:37:35Z" level=debug msg="Adding tracing to middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_stripprefix@internal
time="2020-05-29T06:37:35Z" level=debug msg="Creating middleware" middlewareName=dashboard_redirect@internal middlewareType=RedirectRegex entryPointName=traefik routerName=dashboard@internal
time="2020-05-29T06:37:35Z" level=debug msg="Setting up redirection from ^(http:\\/\\/[^:\\/]+(:\\d+)?)\\/$ to ${1}/dashboard/" middlewareType=RedirectRegex entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal
time="2020-05-29T06:37:35Z" level=debug msg="Adding tracing to middleware" entryPointName=traefik routerName=dashboard@internal middlewareName=dashboard_redirect@internal
time="2020-05-29T06:37:35Z" level=debug msg="Creating middleware" entryPointName=traefik middlewareName=traefik-internal-recovery middlewareType=Recovery
time="2020-05-29T06:37:35Z" level=debug msg="Added outgoing tracing middleware noop@internal" middlewareType=TracingForwarder entryPointName=web routerName=web-to-443@internal middlewareName=tracing
time="2020-05-29T06:37:35Z" level=debug msg="Creating middleware" routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal middlewareType=RedirectScheme entryPointName=web
time="2020-05-29T06:37:35Z" level=debug msg="Setting up redirection to https 443" entryPointName=web routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal middlewareType=RedirectScheme
time="2020-05-29T06:37:35Z" level=debug msg="Adding tracing to middleware" routerName=web-to-443@internal middlewareName=redirect-web-to-443@internal entryPointName=web
time="2020-05-29T06:37:35Z" level=debug msg="Creating middleware" middlewareName=traefik-internal-recovery middlewareType=Recovery entryPointName=web
time="2020-05-29T06:37:35Z" level=debug msg="No default certificate, generating one"
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:35Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:36Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:36Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:37Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:37Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:37Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:37Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:38Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:38Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:38Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:38Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:39Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:39Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:39Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:39Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:40Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:40Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:40Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:40Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:41Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:41Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:41Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:41Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:42Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:42Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:42Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:42Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:43Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:43Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:43Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:43Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:44Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:44Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:44Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:44Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:45Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:45Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:45Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:45Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:46Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:46Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:46Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:46Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:47Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:47Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:47Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:47Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:48Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:48Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:48Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:48Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:49Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:49Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:49Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:49Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:50Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:50Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:50Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:50Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:51Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:51Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:51Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:51Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:52Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:52Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:52Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:52Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:53Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:53Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:53Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:53Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:54Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:54Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:54Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:54Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:55Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:55Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:55Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:55Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:56Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:56Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:56Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:56Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:57Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:57Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:57Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:57Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:58Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:58Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:58Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:58Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:59Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:37:59Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:59Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:37:59Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:00Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:00Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:00Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:00Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:01Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:01Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:01Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:01Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:02Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:02Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:02Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:02Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:03Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:03Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:03Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:03Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:04Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:04Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:04Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:04Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:05Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:05Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:05Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:05Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:06Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:06Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:06Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:06Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:07Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:07Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:07Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:07Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:08Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:08Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:08Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:08Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:09Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:09Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:09Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:09Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:10Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:10Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:10Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:10Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:11Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:11Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:11Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:11Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:12Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:12Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:12Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:12Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:13Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:13Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:13Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetescrd
time="2020-05-29T06:38:13Z" level=debug msg="Skipping Kubernetes event kind *v1.Endpoints" providerName=kubernetes
time="2020-05-29T06:38:13Z" level=info msg="I have to go..."
time="2020-05-29T06:38:13Z" level=info msg="Stopping server gracefully"
time="2020-05-29T06:38:13Z" level=debug msg="Waiting 10s seconds before killing connections." entryPointName=traefik
time="2020-05-29T06:38:13Z" level=debug msg="Waiting 10s seconds before killing connections." entryPointName=web
time="2020-05-29T06:38:13Z" level=debug msg="Waiting 10s seconds before killing connections." entryPointName=websecure
time="2020-05-29T06:38:13Z" level=error msg="accept tcp [::]:8080: use of closed network connection" entryPointName=traefik
time="2020-05-29T06:38:13Z" level=error msg="accept tcp [::]:4443: use of closed network connection" entryPointName=websecure
time="2020-05-29T06:38:13Z" level=error msg="close tcp [::]:8000: use of closed network connection" entryPointName=web
time="2020-05-29T06:38:13Z" level=debug msg="Entry point web closed" entryPointName=web
time="2020-05-29T06:38:13Z" level=error msg="close tcp [::]:4443: use of closed network connection" entryPointName=websecure
time="2020-05-29T06:38:13Z" level=debug msg="Entry point websecure closed" entryPointName=websecure
time="2020-05-29T06:38:13Z" level=error msg="accept tcp [::]:8000: use of closed network connection" entryPointName=web
time="2020-05-29T06:38:13Z" level=error msg="close tcp [::]:8080: use of closed network connection" entryPointName=traefik
time="2020-05-29T06:38:13Z" level=debug msg="Entry point traefik closed" entryPointName=traefik
time="2020-05-29T06:38:13Z" level=info msg="Server stopped"
time="2020-05-29T06:38:13Z" level=info msg="Shutting down"

Helm values.yaml:

image:
  name: traefik
  tag: 2.2.1

ingressRoute:
  dashboard:
    enabled: false

volumes:
  - name: traefik-conf
    mountPath: "/etc/traefik"
    type: configMap

service:
  enabled: false

ports:
  web:
    port: 8000
    hostPort: 80
    expose: false
  websecure:
    port: 8443
    hostPort: 443
    expose: false

globalArguments: []

Traefik config (ConfigMap mounted at /etc/traefik):

apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.yaml: |
    entryPoints:
      web:
        address: ":8000"
        http:
          redirections:
            entryPoint:
              to: ":443"
      websecure:
        address: ":4443"
    api:
      dashboard: true
      insecure: true
    providers:
      kubernetesCRD: {}
      kubernetesIngress: {}
    log:
      level: DEBUG

Kubernetes describe pod:

Name:         traefik-67c64cff7c-xrwsl
Namespace:    kube-system
Priority:     0
Node:         faust/<IP>
Start Time:   Fri, 29 May 2020 00:52:38 -0400
Labels:       app.kubernetes.io/instance=traefik
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=traefik
              helm.sh/chart=traefik-8.2.1
              pod-template-hash=67c64cff7c
Annotations:  cni.projectcalico.org/podIP: 172.16.160.76/32
              cni.projectcalico.org/podIPs: 172.16.160.76/32
Status:       Running
IP:           172.16.160.76
IPs:
  IP:           172.16.160.76
Controlled By:  ReplicaSet/traefik-67c64cff7c
Containers:
  traefik:
    Container ID:  docker://d976f63b60e1b8141a8c5bf02fef41d3f58a68444fb68dc58d7c1989eec9ecf6
    Image:         traefik:2.2.1
    Image ID:      docker-pullable://traefik@sha256:ad4442a6f88cf35266542588f13ae9984aa058a55a518a87876e48c160d19ee0
    Ports:         9000/TCP, 8000/TCP, 8443/TCP
    Host Ports:    0/TCP, 80/TCP, 443/TCP
    Args:
      --entryPoints.traefik.address=:9000
      --entryPoints.web.address=:8000
      --entryPoints.websecure.address=:8443
      --api.dashboard=true
      --ping=true
      --providers.kubernetescrd
      --providers.kubernetesingress
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Fri, 29 May 2020 02:43:54 -0400
      Finished:     Fri, 29 May 2020 02:44:33 -0400
    Ready:          False
    Restart Count:  39
    Liveness:       http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=3
    Readiness:      http-get http://:9000/ping delay=10s timeout=2s period=10s #success=1 #failure=1
    Environment:    <none>
    Mounts:
      /data from data (rw)
      /etc/traefik from traefik-conf (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from traefik-token-2mqdz (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  traefik-conf:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      traefik-conf
    Optional:  false
  traefik-token-2mqdz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  traefik-token-2mqdz
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason     Age                     From            Message
  ----     ------     ----                    ----            -------
  Warning  Unhealthy  8m49s (x111 over 113m)  kubelet, faust  Readiness probe failed: Get http://172.16.160.76:9000/ping: dial tcp 172.16.160.76:9000: connect: connection refused
  Warning  BackOff    3m50s (x412 over 110m)  kubelet, faust  Back-off restarting failed container

Kubernetes pod (full YAML):

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 172.16.160.76/32
    cni.projectcalico.org/podIPs: 172.16.160.76/32
  creationTimestamp: "2020-05-29T04:52:38Z"
  generateName: traefik-67c64cff7c-
  labels:
    app.kubernetes.io/instance: traefik
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: traefik
    helm.sh/chart: traefik-8.2.1
    pod-template-hash: 67c64cff7c
  # managedFields trimmed for brevity (server-side apply bookkeeping, nothing diagnostic in it)
  name: traefik-67c64cff7c-xrwsl
  namespace: kube-system
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: traefik-67c64cff7c
    uid: 7ad5d831-8aa9-4b15-8830-a5391583f0c7
  resourceVersion: "4418230"
  selfLink: /api/v1/namespaces/kube-system/pods/traefik-67c64cff7c-xrwsl
  uid: 13a6b50e-6860-42ba-8ee4-aa5f8a8a3cae
spec:
  containers:
  - args:
    - --entryPoints.traefik.address=:9000
    - --entryPoints.web.address=:8000
    - --entryPoints.websecure.address=:8443
    - --api.dashboard=true
    - --ping=true
    - --providers.kubernetescrd
    - --providers.kubernetesingress
    image: traefik:2.2.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /ping
        port: 9000
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 2
    name: traefik
    ports:
    - containerPort: 9000
      name: traefik
      protocol: TCP
    - containerPort: 8000
      hostPort: 80
      name: web
      protocol: TCP
    - containerPort: 8443
      hostPort: 443
      name: websecure
      protocol: TCP
    readinessProbe:
      failureThreshold: 1
      httpGet:
        path: /ping
        port: 9000
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 2
    resources: {}
    securityContext:
      capabilities:
        drop:
        - ALL
      readOnlyRootFilesystem: true
      runAsGroup: 65532
      runAsNonRoot: true
      runAsUser: 65532
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /etc/traefik
      name: traefik-conf
      readOnly: true
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: traefik-token-2mqdz
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: faust
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 65532
  serviceAccount: traefik
  serviceAccountName: traefik
  terminationGracePeriodSeconds: 60
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: data
  - configMap:
      defaultMode: 420
      name: traefik-conf
    name: traefik-conf
  - name: traefik-token-2mqdz
    secret:
      defaultMode: 420
      secretName: traefik-token-2mqdz
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-05-29T04:52:38Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-05-29T04:52:38Z"
    message: 'containers with unready status: [traefik]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-05-29T04:52:38Z"
    message: 'containers with unready status: [traefik]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-05-29T04:52:38Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://d976f63b60e1b8141a8c5bf02fef41d3f58a68444fb68dc58d7c1989eec9ecf6
    image: traefik:2.2.1
    imageID: docker-pullable://traefik@sha256:ad4442a6f88cf35266542588f13ae9984aa058a55a518a87876e48c160d19ee0
    lastState:
      terminated:
        containerID: docker://d976f63b60e1b8141a8c5bf02fef41d3f58a68444fb68dc58d7c1989eec9ecf6
        exitCode: 0
        finishedAt: "2020-05-29T06:44:33Z"
        reason: Completed
        startedAt: "2020-05-29T06:43:54Z"
    name: traefik
    ready: false
    restartCount: 39
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=traefik pod=traefik-67c64cff7c-xrwsl_kube-system(13a6b50e-6860-42ba-8ee4-aa5f8a8a3cae)
        reason: CrashLoopBackOff
  hostIP: <IP>
  phase: Running
  podIP: 172.16.160.76
  podIPs:
  - ip: 172.16.160.76
  qosClass: BestEffort
  startTime: "2020-05-29T04:52:38Z"

This is so strange. I switched from the mounted config file to passing the flags as additionalArguments on the chart, and the problem went away.

In hindsight it makes sense: Traefik only takes its static configuration from one source, and because it found /etc/traefik/traefik.yaml it ignored the chart's CLI flags entirely. You can see it in the dump above: the "Static configuration loaded" line has the traefik entrypoint on its default :8080 and no ping section, even though the container args ask for --entryPoints.traefik.address=:9000 and --ping=true. With nothing serving /ping on 9000, the liveness probe failed three times, the kubelet sent SIGTERM, and Traefik shut down gracefully ("I have to go...", exit code 0). That is why every restart shows up as "Completed" with no error in the logs. (If you'd rather keep a ConfigMap, there is an untested sketch of a compatible one after the new values.yaml below.)
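For anyone skimming, the mismatch condensed from the dumps above:

# Effective static config (from the "Static configuration loaded" debug line):
# the traefik entrypoint sits on its default :8080 and there is no ping section
entryPoints:
  traefik:
    address: ":8080"

# Chart-rendered probes (from the pod spec): they expect /ping on :9000
livenessProbe:
  httpGet:
    path: /ping
    port: 9000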

New Helm values.yaml:

image:
  name: traefik
  tag: 2.2.1

# ingressRoute:
#  dashboard:
#    enabled: false

# volumes:
#  - name: traefik-conf
#    mountPath: "/etc/traefik"
#    type: configMap

service:
  enabled: false

ports:
  web:
    port: 8000
    hostPort: 80
    expose: false
  websecure:
    port: 8443
    hostPort: 443
    expose: false

additionalArguments:
  - "--entryPoints.web.address=:8000"
  - "--entryPoints.websecure.address=:8443"
  - "--entryPoints.web.http.redirections.entryPoint.to=:443"
  - "--api.dashboard"
  - "--api.insecure"
  - "--providers.kubernetesingress"
  - "--providers.kubernetescrd"
  - "--log.level=DEBUG"

globalArguments: []
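
If you'd rather keep the mounted ConfigMap instead of additionalArguments, I think the file just has to line up with what the chart's probes and ports expect: traefik entrypoint on :9000, ping enabled, and websecure on :8443 to match values.yaml. Untested sketch:

apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: kube-system
data:
  traefik.yaml: |
    entryPoints:
      traefik:
        address: ":9000"      # same port the chart's liveness/readiness probes hit
      web:
        address: ":8000"
        http:
          redirections:
            entryPoint:
              to: ":443"
      websecure:
        address: ":8443"      # matches ports.websecure.port in values.yaml (was :4443 before)
    ping:
      entryPoint: traefik     # exposes /ping so the probes can pass
    api:
      dashboard: true
      insecure: true
    providers:
      kubernetesCRD: {}
      kubernetesIngress: {}
    log:
      level: DEBUG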