I wrote some posts about how kubernetes interacts with docker at the CRI level, but what about networking? Docker has developed its own solution called CNM (Container Network Model) to maintain networking; to interact with it, just use the commands under docker network. Here is what it looks like in a k8s multi-node environment:

# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ca3e40762c13        bridge              bridge              local
ae38ae125258        host                host                local
d22e6da563ab        none                null                local

What stands behind network ID d22e6da563ab?

# docker network inspect d22e6da563ab
...
        "Containers": {
            "03a8117bf896bbd93340382fc71093686f25def0739431aa7c6a4ca8bb0a6102": {
...

It seems that most of the containers sit here. Why is the null driver used? Because k8s decided not to use docker's CNM in favour of CNI. The null driver only provides a loopback device. In this blog post I won't describe CNI itself, but I will try to achieve more or less the same result by connecting a selected container to a bridged network using system tools. Create a Linux network bridge and bind an IP address:

# brctl addbr brtest
# ip link set brtest up
# ip addr add 192.168.101.1/24 dev brtest
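
If brctl (from bridge-utils) is not installed, the same bridge can be created with pure iproute2; an equivalent sketch:

# ip link add name brtest type bridge
# ip link set brtest up
# ip addr add 192.168.101.1/24 dev brtest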

Create a sample container with the none network:

# docker run -d --name cni --net=none busybox:latest sleep 86400
# docker exec cni sh -c "ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

Docker creates a Linux network namespace for the new container; to get the list of network namespaces, just provide the correct path for the iproute2 commands:

# ln -s /var/run/docker/netns/ /var/run/netns
# ip netns list
5c35ec636755
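
When more containers are running, the right namespace can be matched to the container by its sandbox key; a hedged sketch (the id will of course differ on your host):

# docker inspect --format '{{.NetworkSettings.SandboxKey}}' cni
/var/run/docker/netns/5c35ec636755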

Now we know which namespace is used; using that name we can create a veth pair of interfaces, one for the host and one for the container:

# ip link add v11 type veth peer name v12 netns 5c35ec636755
# ip a show dev v11
5: v11@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:fe:9d:1d:6a:2f brd ff:ff:ff:ff:ff:ff link-netnsid 0
# ip netns exec 5c35ec636755 ip a show dev v12
2: v12@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether be:aa:c5:68:b6:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
# ip link set v11 up

Add an IP address to the interface in the container:

# ip netns exec 5c35ec636755 ip link set v12 up
# ip netns exec 5c35ec636755 ip addr add 192.168.101.3/24 dev v12

Now connect the host endpoint of the veth pair to the bridge brtest:

# brctl addif brtest v11
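
At this point the container can only talk to the 192.168.101.0/24 subnet. If it should reach beyond the bridge, a default route in the namespace plus forwarding and NAT on the host would be needed; a rough sketch, not required for the test below:

# ip netns exec 5c35ec636755 ip route add default via 192.168.101.1
# sysctl -w net.ipv4.ip_forward=1
# iptables -t nat -A POSTROUTING -s 192.168.101.0/24 ! -o brtest -j MASQUERADE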

Run some tests:

# docker exec cni sh -c "ping -c 3 192.168.101.1"
PING 192.168.101.1 (192.168.101.1): 56 data bytes
64 bytes from 192.168.101.1: seq=0 ttl=64 time=0.061 ms
64 bytes from 192.168.101.1: seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.101.1: seq=2 ttl=64 time=0.113 ms

--- 192.168.101.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.061/0.093/0.113 ms

Communication is established between the container and the host. It's pretty close to what the CNI bridge plugin does; I will take a closer look at it in the next blog post.
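
As a teaser, the CNI bridge plugin is driven by a JSON config that describes roughly the setup done by hand above; a minimal sketch (assuming the standard CNI plugins package, names and paths are illustrative):

# cat <<EOF > /etc/cni/net.d/10-brtest.conf
{
    "cniVersion": "0.3.1",
    "name": "brtest-net",
    "type": "bridge",
    "bridge": "brtest",
    "isGateway": true,
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.101.0/24"
    }
}
EOF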


dropwatch - discover where network packets are dropped

Sat 30 May 2020 by admin

Let's imagine a situation where you experience a network problem with dropped packets and you have no idea where the problem is located. So first of all, prepare the environment:

# iptables -A OUTPUT -p icmp -j DROP
# ping -c 3 -W 1 8.8.8.8
PING 8.8.8.8 (8 ...
read more

mkubectx - single command across all your selected kubernetes contexts

Sun 10 May 2020 by admin

Pause - most popular container in k8s environment

Sat 02 May 2020 by admin

Probably the most popular container in a kubernetes environment. The container image is really small:

$ docker images | grep -i pause
k8s.gcr.io/pause                          3.2                 80d28bedfe5d        2 months ago        683kB

The codebase of pause is also small. According to the source code, it is responsible for doing pretty much... nothing, except dealing ...

read more

How to change default k8s container runtime to containerd ?

Sun 26 April 2020 by admin

The default kubelet container runtime configuration uses docker as the CRI. Containerd has another implementation of CRI; it should fulfill the same requirements as docker, i.e. it's responsible for maintaining container images. Containerd extends its functionality by using plugins, one of them being cri. To interact with containerd we can ...

read more

How kubernetes is interacting with docker ?

Sun 19 April 2020 by admin

In a few words, the Container Runtime Interface is the answer to this question. But we are going a little bit deeper. First of all, what is CRI? CRI is one of the most mature interfaces in kubernetes; it's a bridge between kubelet and the container runtime. k8-diagram. Creating such interfaces in ...

read more

Kubectl - writing your own plugin

Sat 28 March 2020 by admin

Kubectl is the entrypoint for maintaining k8s clusters; you can find a lot of useful switches to extract data. For example, to get all container images, just use:

$ kubectl get pods --all-namespaces -o go-template --template="{{range .items}}{{range .spec.containers}}{{printf \"%s\n\" .image}}{{end}}{{end}}"
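
A similar result can be achieved with jsonpath output; a hedged equivalent (images are printed space-separated):

$ kubectl get pods --all-namespaces -o jsonpath='{.items[*].spec.containers[*].image}'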

Lots of switches ...

read more

Growpart

Sun 17 June 2018 by admin

An easy alternative to fdisk and partprobe when resizing a partition to its maximum size. Extensively used in cloud environments (e.g. by cloud-init).
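
Typical usage, as a hedged sketch (device and partition number are examples): grow partition 1 on /dev/sda to its maximum, then resize the ext filesystem on it:

# growpart /dev/sda 1
# resize2fs /dev/sda1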

https://www.systutorials.com/docs/linux/man/1-growpart/

read more

Sysdig Tracers

Sun 03 June 2018 by admin

Tracers are a nice extension to one of my favourite troubleshooting tools, sysdig. Using a dead simple approach of writing tags to /dev/null, they give an idea about your app's health. Better performing than the popular statsd for measuring durations, thanks to a low overhead of ca. 1 microsecond per tracer. Moreover ...

read more

HTTP Server-Timing

Sun 20 May 2018 by admin

One of the easiest ways to visualize your app's internal performance metrics on demand is to use a pretty new standard described in:

https://www.w3.org/TR/server-timing/

combined with, for example, Chrome Developer Tools from version 65. More practically, with a Python Flask app:

...
read more