Sunday, May 7, 2023

k3d walkthrough - does it replace minikube?

What is k3d?

k3d is a lightweight wrapper that runs k3s (Rancher's minimal Kubernetes distribution) in Docker. It can be installed on your notebook or on low-power devices to test Kubernetes nodes and clusters.

Problems with minikube

Previously I was happily using Minikube, an alternative single-node cluster for development. But I ran into issues such as port-forwarding and the cluster not restarting after the host system reboots. After a lot of research I found some workarounds (to access Minikube from any host in the home network):

kubectl port-forward 

minikube tunnel 

These solutions worked to some extent, but I noticed they created a lot of unnecessary processes when I checked with the "htop" command.

I tried installing Nginx as a reverse proxy to forward traffic into the ingress load balancer, and I was reasonably happy with this. I could even provide the Minikube IP address directly as the local service in the Cloudflare tunnel configuration. So everything was working like a charm. However, when I restarted the host system where Minikube was running, I saw that Minikube had stopped, and when I tried to start it again I got errors. To recover, I had to stop and delete Minikube and redeploy all the manifests, which is tedious.


Why k3d

I compared Minikube with Kind and k3d, and I chose k3d because it is lightweight and very easy to install.


Installation

Installation of k3d is very simple. Just run the following command:

curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash 

(Prerequisite: Docker must be installed; verify with docker --version.)

Now k3d can be directly tested.

Create clusters and nodes

k3d cluster create tkb --servers 1 --agents 3 --image rancher/k3s:latest

kubectl cluster-info

k3d cluster list

k3d cluster delete tkb

Install ingress 

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud/deploy.yaml
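Once the controller is running, a hostname can be routed to a service with an Ingress resource. A minimal sketch (the service name "tomcat" and port 8080 are hypothetical; adjust to your deployment):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat
spec:
  ingressClassName: nginx
  rules:
  - host: tomcat.kpaudel.lab   # hostname the ingress should answer for
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat       # hypothetical Service name
            port:
              number: 8080     # hypothetical Service port
```

Apply it with kubectl apply -f and the ingress controller will route requests with that Host header to the service.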

Now the cool thing is that we can forward traffic from the host system into the ingress load balancer:

k3d cluster create tkb -p "80:80@loadbalancer" --servers 1 --agents 3

Here, calling port 80 on the host directly reaches the load balancer. To access hostnames, we need to add the hostname entries to the "/etc/hosts" file on Linux (or simply use a central Pi-hole DNS configuration). Then we can call the services by hostname:

http://tomcat.kpaudel.lab

If a record for tomcat.kpaudel.lab exists in the "/etc/hosts" file with the IP of the host system running k3d, we reach the Tomcat service. We don't need any reverse proxy. This actually worked in the Cloudflare tunnel configuration too; I just needed to provide the host IP address (192.168.1.XXX), not the cluster IP address.
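Concretely, the entry on the client machine could look like this (192.168.1.50 is a placeholder for the actual LAN address of the host running k3d):

```
# /etc/hosts on the client machine
192.168.1.50  tomcat.kpaudel.lab
```

Any hostname served by the ingress can be added to the same line or as additional lines pointing at the same IP.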


Friday, May 5, 2023

Make k8s services available globally

Solution from Cloudflare

Create account 

https://www.cloudflare.com/ 

Create a web application and set up your domain. 

Update the nameservers by logging in to your domain registrar's page (register.com.np).

It can take about 24 hours for the nameserver records to update before you can continue with further testing.


Create tunnel

Zero Trust => Access => Tunnels, click "Create tunnel", and provide the tunnel name.

Then select Docker and copy the docker command to run on your server. The recommended way is to create a compose file and set the token as an environment variable, because the token should be kept secure. (.bashrc is one of the places where environment variables can be set.)
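For example, the token can be exported in the current shell (or the line can be appended to ~/.bashrc to persist it) before starting the container. The token value below is a placeholder; the real one comes from the Cloudflare Zero Trust dashboard when you create the tunnel:

```shell
# Export the Cloudflare tunnel token so docker compose can pick it up.
# 'your-tunnel-token-here' is a placeholder, not a real token.
export CLOUDFLARE_TUNNEL_TOKEN='your-tunnel-token-here'
```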

docker-compose.yaml

version: '3.0'

networks:
  minikube:
    external: true

services:
  cloudflaretunnel:
    container_name: cloudflaretunnel-demo-1
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    environment:
      - TUNNEL_TOKEN=$CLOUDFLARE_TUNNEL_TOKEN
    command: tunnel --no-autoupdate run
    networks:
      - minikube

Run this with "docker compose up", and that's it; the tunnel is created. Now Cloudflare can forward traffic from this container to the services running on the server.

Now check the created tunnel: if it shows as "Healthy", we can be sure everything is working so far.

Now we configure the tunnel: we define the public hostnames and map them to the services on the local server.

We provide the ingress IP address as the Service URL, with type HTTP.


This hostname should also be configured in the ingress ruleset. After clicking "Save hostname", we see the magic: the public hostname "subdomain.domain" reaches the service running in your local network.


So you don't need to configure anything else: no port forwarding, no router settings, no static IP address. This is the solution I had been looking for for seven years, and now I have it.

Credit goes to this guy
https://www.youtube.com/watch?v=yMmxw-DZ5Ec 


Accessing k8s services in home network

Challenge: to access Kubernetes services from a remote computer in LAN.

>> Ingress configure

>> Forward traffic from the host system to ingress (using Nginx)

>> We need to use subdomains to avoid link problems in the app. For example:

    kpaudel.com/tomcat will successfully open Tomcat, but the button links inside Tomcat itself do not point to the correct URL.

So, instead, we define subdomains (for example tomcat.kpaudel.com). This way many subdomains can point to the same address, which calls for some mechanism to implement wildcard DNS.

>> Wildcard dns (multiple domains pointing to the same IP address)
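The Nginx forwarding step mentioned above can be sketched as follows (the hostname and the ingress controller IP are placeholders):

```nginx
server {
    listen 80;
    server_name tomcat.kpaudel.com;   # hypothetical subdomain

    location / {
        # Forward everything to the ingress load balancer
        # (placeholder cluster-facing IP; use your own)
        proxy_pass http://192.168.49.2:80;
        # Preserve the original hostname so ingress rules match
        proxy_set_header Host $host;
    }
}
```

Passing the Host header through is what lets the ingress pick the right backend per subdomain.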

Wildcard DNS in Pi-hole

https://hetzbiz.cloud/2022/03/04/wildcard-dns-in-pihole/ 
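In Pi-hole, a wildcard can be achieved with a dnsmasq drop-in file, as the link above describes. A sketch (192.168.1.50 is a placeholder for the k3d host's LAN address):

```
# /etc/dnsmasq.d/02-wildcard.conf on the Pi-hole host
# Resolves kpaudel.com and every subdomain of it to the same IP.
address=/kpaudel.com/192.168.1.50
```

After adding the file, restart the Pi-hole FTL/dnsmasq service so the record takes effect.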

(Alternative: create your own bind9+docker)