Tailscale - first encounter
I was looking for a VPN solution to provide connectivity between my devices and my self-hosted Kubernetes cluster running in the cloud. The main requirement was to avoid exposing anything to the Internet, as in a model with a VPN concentrator to which every device connects, and instead to have every device connected to every other, aka mesh connectivity. After a few Google searches I found two candidates: Tailscale and ZeroTier. I wondered how each of them works, what it provides, and whether it fulfills my main requirement. I started analyzing Tailscale first and found a pretty well-written explanation of the details. So I gave it a chance, and it was a nice experience; the installation went smoothly. I'm not going to present all the details, because I didn't do anything fancy. I will only concentrate on a few things I noticed after using it for a while:
- the personal edition is good enough for my needs
- official docs on how to set up connectivity from/to a Kubernetes cluster
- the Tailscale daemon on Linux is written in Go and its code is available on GitHub
- every device connected to your tailnet gets its own DNS name (MagicDNS) that can be customized; it would be nice to be able to add custom DNS entries when your device serves access to other hosts behind it
- a growing number of extra services, like SSH. The Tailscale daemon running on your device can act as an SSH server (currently Linux only); it doesn't interfere with the locally running one. The Tailscale SSH server is used only when you connect through the tailnet; you authenticate with your Tailscale credentials and control access to the host through Tailscale ACLs. More info
- a Terraform provider
- when one of the devices in your tailnet serves access to other hosts behind it, aka a subnet router, you can accept those routes on your end, which on a Linux machine requires:
$ sudo tailscale up --accept-routes
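For context, on the subnet-router side the routes have to be advertised first. A minimal sketch, following the official subnet-router docs (the 192.168.100.0/24 prefix is the one from my setup):

```shell
# enable IP forwarding so the node can route for the subnet
echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf

# advertise the local subnet to the tailnet
# (the route also has to be approved in the admin console)
sudo tailscale up --advertise-routes=192.168.100.0/24
```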
but when you start looking at your routing table, there is no trace of those routes:
$ ip route | grep tailscale
I was confused: how is it done, then? Then I remembered policy routing in Linux, which I hadn't used in a long time, so let's start by looking at the rules that decide which routing tables are taken into account:
$ sudo ip rule list
0: from all lookup local
5210: from all fwmark 0x80000/0xff0000 lookup main
5230: from all fwmark 0x80000/0xff0000 lookup default
5250: from all fwmark 0x80000/0xff0000 unreachable
5270: from all lookup 52
32766: from all lookup main
32767: from all lookup default
$ sudo ip route show table 52
100.77.230.72 dev tailscale0
100.100.100.100 dev tailscale0
100.110.186.92 dev tailscale0
192.168.100.202 dev tailscale0
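The same mechanism can be reproduced by hand, which helps demystify it. A sketch using a scratch table (the table number 100, the interface eth0, and the 192.0.2.10 address are arbitrary examples for illustration, not anything Tailscale uses):

```shell
# consult table 100 before the main table (lower priority number wins)
sudo ip rule add priority 1000 lookup 100

# put a host route into the scratch table
sudo ip route add 192.0.2.10 dev eth0 table 100

# the route is invisible in the main table but active via the rule
ip route show table 100
ip rule list

# clean up
sudo ip rule del priority 1000
sudo ip route flush table 100
```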
the rule with priority 5270 sends lookups to table 52, so it takes precedence over the main table (priority 32766). What about adding a route to another host in 192.168.100.0/24:
$ sudo ip route add 192.168.100.1 dev tailscale0 table 52
fortunately, Tailscale does not pass this traffic, reporting that there is no associated peer node.
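A quick way to confirm which table the kernel actually consults for a given destination is ip route get; using one of the peers from the output above:

```shell
# asks the kernel to resolve the route for this destination;
# for a tailnet peer it should show the lookup going through
# table 52 via the tailscale0 interface
ip route get 192.168.100.202
```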
Next time I will take a look at how Tailscale works inside a Kubernetes cluster.