Understanding K3d Ingress

Rob Mengert
9 min read · Aug 29, 2023
K3d Logo

K3d is a dockerized distribution of Rancher’s K3s. It’s incredibly useful for local development against Kubernetes and allows multiple clusters to live on the same host machine. All that is required to use K3d is Docker and kubectl. After that, the K3d installation is super easy.
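
For reference, installing K3d itself is a one-liner using the install script from the official docs at the time of writing (check the K3d docs for the current recommended method before piping anything into a shell):

$ curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# or, on a Mac: brew install k3d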

This post will walk through various ways to get traffic in and out of a K3d cluster for testing purposes with a focus on ingress.

macOS-isms

Docker Desktop for Mac does not run the Docker daemon natively on macOS. Instead, it installs a Linux VM and runs Docker there. While there are pros and cons to that approach, one of the cons is that there is no docker0 bridge interface on the host operating system. A side effect is that containers cannot be reached by IP from the host operating system.

A K3d cluster called test has been spun up on my M1 Mac. Here are the IP addresses assigned to the Docker containers that make up the cluster:

[rmengert@Robs-MBP:~]
$ k3d cluster create test
<< OMITTED CLUSTER CREATION OUTPUT FOR CLARITY >>

[rmengert@Robs-MBP:~]
$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -q) | grep test
/k3d-test-tools - 172.27.0.2
/k3d-test-serverlb - 172.27.0.4
/k3d-test-server-0 - 172.27.0.3

However, these addresses cannot be reached from the host OS:

[rmengert@Robs-MBP:~]
$ ping -c 2 172.27.0.3
PING 172.27.0.3 (172.27.0.3): 56 data bytes
Request timeout for icmp_seq 0

--- 172.27.0.3 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss

Why does this matter for K3d? Because by default, K3d spins up a Traefik ingress controller whose LoadBalancer service is bound to one of the cluster's node IP addresses:

[rmengert@Robs-MBP:~]
$ kubectl get svc -A
NAMESPACE     NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
default       kubernetes       ClusterIP      10.43.0.1       <none>        443/TCP                      9m12s
kube-system   kube-dns         ClusterIP      10.43.0.10      <none>        53/UDP,53/TCP,9153/TCP       9m9s
kube-system   metrics-server   ClusterIP      10.43.227.114   <none>        443/TCP                      9m8s
kube-system   traefik          LoadBalancer   10.43.233.145   172.27.0.3    80:32114/TCP,443:30997/TCP   8m23s

In the above output, the external IP for the traefik service matches the IP for k3d-test-server-0 (172.27.0.3). But since the host can’t reach that address, something else needs to be done in order to interact with services that are spun up within this cluster.

How to Map Traffic into The Cluster — Mac

Let’s get a workload running and get some traffic flowing. Some resources are defined in a Git repo; let’s clone it and apply them.

$ git clone git@github.com:TheFutonEng/k3d-article.git && cd k3d-article
<< OMITTED >>

$ kubectl apply -f manifests/
ingressroute.traefik.containo.us/nginx-ingressroute created
pod/nginx-pod created
service/nginx-service created

Note that the above kubectl apply command created three objects, including a Traefik ingress route. The important thing to note here is that when this cluster was created, no host ports were mapped into it. This means that the only way to interact with this service from a browser on the Mac is through a kubectl port-forward to the service created above.

$ kubectl port-forward svc/nginx-service 8080:80
Forwarding from [::1]:8080 -> 80

With this port-forward running, the nginx service can be reached on port 8080:

Browser session to http://localhost:8080 showing the default Nginx home page.
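
If a browser isn’t handy, curl works just as well through the port-forward; the page is the stock Nginx welcome page shown above:

$ curl -s http://localhost:8080 | grep title
<title>Welcome to nginx!</title>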

While this is great, the ingress route isn’t being used in this setup, and it can’t be unless some additional parameters are supplied at cluster creation time. Let’s spin up another cluster:

$ k3d cluster create test-ingress -p "8082:80@loadbalancer" --agents 2

The above command creates another K3d cluster and maps port 8082 on the host to port 80 on the containers that match the loadbalancer nodefilter.
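
One way to confirm the mapping took effect (assuming the default k3d container naming) is to check the port bindings on the load balancer container:

$ docker ps --filter name=k3d-test-ingress-serverlb
# The PORTS column should include something like 0.0.0.0:8082->80/tcp
# alongside the API server mapping (exact details will vary)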

Note that if you try to use a port that has already been mapped into a cluster, K3d will throw an error and tear the cluster down.

Let’s spin up the same workload in this new cluster:

$ kubectl apply -f manifests/
ingressroute.traefik.containo.us/nginx-ingressroute created
pod/nginx-pod created
service/nginx-service created

Mapping in a host port at cluster creation time means that a kubectl port-forward is not required. However, the /etc/hosts file must be updated to bind the name from the ingress route to the localhost address (the name is nginx.local in the ingress.yaml file).
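
For reference, the IngressRoute applied from the repo presumably looks something like the following sketch; the repo’s ingress.yaml is the source of truth, but the idea is to route the nginx.local host to nginx-service on port 80:

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: nginx-ingressroute
spec:
  entryPoints:
    - web                        # Traefik's port 80 entrypoint
  routes:
    - match: Host(`nginx.local`)
      kind: Rule
      services:
        - name: nginx-service
          port: 80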

$ echo "127.0.0.1 nginx.local" | sudo tee -a /etc/hosts
Password:
127.0.0.1 nginx.local

With this in place, nginx.local will resolve to 127.0.0.1 on the local Mac. Now we can browse to http://nginx.local:8082:

Browser session to http://nginx.local:8082 showing the default Nginx home page.
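
curl can exercise the same path from the terminal; the --resolve flag pins nginx.local to 127.0.0.1 for just this request, which is handy if you would rather not touch /etc/hosts at all:

$ curl -s --resolve nginx.local:8082:127.0.0.1 http://nginx.local:8082 | grep title
<title>Welcome to nginx!</title>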

How to Map Traffic into The Cluster — Linux

Because Docker runs as a native process on Linux, no kubectl port-forward shenanigans are ever needed if a service of type NodePort is used. Even if we spin up a cluster without binding any host ports, we can still reach the workload by hitting the Docker containers directly. Spin up a cluster, deploy the manifests, and let’s test:

[rmengert@linux:~/projects/k3d-article]
$ k3d cluster create test

[rmengert@linux:~/projects/k3d-article]
$ docker inspect -f '{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -q) | grep test
/k3d-test-serverlb - 172.25.0.3
/k3d-test-server-0 - 172.25.0.2

[rmengert@linux:~/projects/k3d-article]
$ kubectl apply -f manifests/
ingressroute.traefik.containo.us/nginx-ingressroute created
pod/nginx-pod created
service/nginx-service created

[rmengert@linux:~/projects/k3d-article]
$ kubectl apply -f node_port_svc/
service/nginx-service configured

You may have noticed that the K3d cluster spun up on the Mac had three containers, whereas the cluster on Linux only has two. The extra container is a tools container that provides a convenient environment for building and pushing Docker images directly to the K3d cluster. As previously mentioned, since Docker on Mac (and Windows) runs in a virtual machine, it can be cumbersome to build images on the host and then make them available to the k3d cluster. The tools container makes this process easier by providing a Docker daemon inside the cluster. Linux doesn’t require this container since Docker runs natively on the host; there’s no need to have another daemon running.
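
In practice that workflow boils down to building an image with the local Docker daemon and then importing it into the cluster; the image name here is purely an example:

$ docker build -t my-app:dev .            # hypothetical image built on the host
$ k3d image import my-app:dev -c test     # copy it into the nodes of the 'test' cluster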

The last command above reconfigures the nginx-service from a ClusterIP service to a NodePort service.
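
For reference, the manifest in node_port_svc/ presumably looks something like the sketch below; the selector label is an assumption, and the repo’s file is the source of truth:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx          # assumed pod label
  ports:
    - port: 80
      targetPort: 80    # no nodePort is pinned, so Kubernetes picks one from 30000-32767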

[rmengert@linux:~/projects/k3d-article]
$ kubectl get svc nginx-service
NAME            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx-service   NodePort   10.43.35.10   <none>        80:32614/TCP   14m

In this case, a NodePort of 32614 was assigned. If we target this port on the server-0 node, we should see the Nginx default page. I don’t have a UI available on this Linux machine, so curl will be used:

[rmengert@linux:~/projects/k3d-article]
$ curl http://172.25.0.2:32614
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Groovy. While this works, exposing the Traefik ingress controller would still be preferable, because then the workload could be reached by any host on my network. The network these K3d nodes sit on is created by Docker and is only reachable from the Linux machine itself. Let’s spin up another cluster and map a port to the loadbalancer nodefilter, as was done on the Mac.

[rmengert@oakridge:~/projects/k3d-article]
$ k3d cluster create test-ingress -p "8082:80@loadbalancer" --agents 2

Port 8082 on the Linux machine is being mapped to port 80 on the loadbalancer nodes.

Spin up the workload again:

[rmengert@linux:~/projects/k3d-article]
$ kubectl apply -f manifests/
ingressroute.traefik.containo.us/nginx-ingressroute created
pod/nginx-pod created
service/nginx-service created

But now we’re going to access the workload from the Mac via a browser. We need the IP address of the Linux host first:

[rmengert@oakridge:~/projects/k3d-article]
$ ip ad show br0 | grep inet
inet 192.168.1.70/24 brd 192.168.1.255 scope global dynamic noprefixroute br0
inet6 fe80::3475:98ff:fe9e:4fdf/64 scope link

As shown above, the IP address of the Linux host is 192.168.1.70, which needs to be put into /etc/hosts again:

[rmengert@Robs-MBP:~]
$ sudo sed -i'' -e 's/127\.0\.0\.1 nginx\.local/192\.168\.1\.70 nginx.local/' /etc/hosts
Password:
[rmengert@Robs-MBP:~]

The above command assumes that the 127.0.0.1 nginx.local line is still present in the /etc/hosts file. I’ll confirm that the name is resolving correctly by pinging it:

[rmengert@Robs-MBP:~]
$ ping -c 2 nginx.local
PING nginx.local (192.168.1.70): 56 data bytes
64 bytes from 192.168.1.70: icmp_seq=0 ttl=64 time=117.063 ms
64 bytes from 192.168.1.70: icmp_seq=1 ttl=64 time=0.625 ms

--- nginx.local ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.625/58.844/117.063/58.219 ms

Cool. We can now test reaching the workload by browsing to http://nginx.local:8082 again:

Browser session to http://nginx.local:8082 showing the default Nginx home page.

Sweet! This means that workloads can be run on a Linux machine and interacted with from a Mac or anything else on the local network with a GUI.

How to Use a Different Ingress Controller

Traefik is great, but let's say you need to test traffic through a different ingress controller. What can you do? First, you need to spin up a cluster without Traefik.

For this example, I’m going to use a Linux machine and some files from the same git repo referenced earlier. There is a configuration file to spin up a K3d cluster with the tweaks that are needed.

# If you haven't already, clone the repo.
$ git clone git@github.com:TheFutonEng/k3d-article.git
<< OMITTED >>

$ cd k3d-article/haproxy

$ k3d cluster create --config k3d_config.yaml

Let’s dig into the k3d_config.yaml configuration file to understand what it’s doing (more details about the K3d configuration file can be found in the K3d documentation).

apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: test-haproxy
servers: 1
agents: 2
ports:
  - port: 8083:80
    nodeFilters:
      - loadbalancer
options:
  k3s:
    extraArgs:
      - arg: "--disable=traefik"
        nodeFilters:
          - server:*

There are a couple of items to highlight here:

  • The name of this cluster is going to be test-haproxy, as denoted by metadata.name
  • This cluster is going to have one control plane node (server) and two worker nodes (agents)
  • As has been done previously, port 8083 is going to be mapped from the host to port 80 on the nodes that match the loadbalancer nodefilter
  • Arguments are passed on to K3s under options.k3s to disable Traefik (checked just below)
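
Once the cluster is up, a quick way to confirm that Traefik really is disabled is to look for its pods in kube-system; no output here means nothing Traefik-related was deployed:

$ kubectl get pods -n kube-system | grep -i traefik
# (no output expected, since Traefik was disabled at cluster creation)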

The ingress controller that we’re going to test here is HAProxy. Let’s install it via Helm:

$ helm repo add haproxytech https://haproxytech.github.io/helm-charts
"haproxytech" has been added to your repositories

$ kubectl create ns haproxy-controller
namespace/haproxy-controller created

$ helm install haproxy-kubernetes-ingress haproxytech/kubernetes-ingress \
-n haproxy-controller -f values.yaml
<<OMITTED>>

The values.yaml file warrants further exploration:

controller:
  kind: DaemonSet
  daemonset:
    useHostPort: true

  service:
    enabled: false

By default, HAProxy is deployed as a Kubernetes Deployment. Switching to a DaemonSet means one HAProxy ingress controller pod will run on every node. The useHostPort parameter means that HAProxy will bind to host ports 80/443/1024, per the defaults in the chart's main values file. Port 80 is what K3d is going to forward traffic to based on its configuration file. And finally, with HAProxy using hostPorts to get traffic into its pods, a Kubernetes service fronting HAProxy is not necessary, so service.enabled is set to false.
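
Once the chart is installed, it is worth confirming that a controller pod landed on every node, since that is the whole point of the DaemonSet change (names and ages will vary):

$ kubectl get pods -n haproxy-controller -o wide
# Expect one haproxy-kubernetes-ingress pod per node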

Let’s spin up a similar workload from the repo. Traefik has a custom ingress type, as we saw; HAProxy uses the standard Kubernetes Ingress object:

$ kubectl apply -f haproxy/manifests/
ingress.networking.k8s.io/nginx-ingress created
pod/nginx-pod created
service/nginx-service created
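
For reference, the Ingress in haproxy/manifests/ presumably looks something like the sketch below; the ingressClassName is an assumption and the repo's manifest is the source of truth:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: haproxy        # assumed; matches the chart's default ingress class
  rules:
    - host: nginx.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80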

With that all covered, let’s test. Port 8083 was mapped from the host into the cluster in the K3d configuration file, so that’s what we target. Remember that the nginx.local name still points to the Linux host in /etc/hosts on my Mac.

Browser session to http://nginx.local:8083 showing the default Nginx home page.

Success!

Wrap Up

I hope you had some fun with K3d and learned some things along the way.
