Docker and Kubernetes at the network layer
I wrote some posts about how Kubernetes interacts with Docker at the CRI level, but what about networking? Docker has developed its own solution, called CNM (Container Network Model), to manage container networking; to interact with it, use the commands under docker network. Here is how it looks in a k8s multi-node environment:
# docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
ca3e40762c13   bridge    bridge    local
ae38ae125258   host      host      local
d22e6da563ab   none      null      local
What stands behind network ID d22e6da563ab?
# docker network inspect d22e6da563ab
...
"Containers": {
"03a8117bf896bbd93340382fc71093686f25def0739431aa7c6a4ca8bb0a6102": {
...
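The truncated inspect output above lists the containers attached to this network; their number can be checked quickly with Docker's Go-template formatting (a small sketch, assuming the same network ID as listed above):
# docker network inspect -f '{{len .Containers}}' d22e6da563ab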
It seems that most of the containers on this node sit here. So why is it using the null driver? It's because k8s decided not to use Docker's CNM in favour of CNI. The null driver only provides a loopback device. In this blog post I won't describe CNI itself, but I will try to achieve more or less the same result by connecting the selected container to a bridged network using system tools.
Create a Linux network bridge and bind an IP address to it:
# brctl addbr brtest
# ip link set brtest up
# ip addr add 192.168.101.1/24 dev brtest
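On hosts without bridge-utils, the bridge itself can be created with iproute2 alone (an equivalent of the brctl addbr call above, not an extra step):
# ip link add name brtest type bridge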
Create a sample container with the none network:
# docker run -d --name cni --net=none busybox:latest sleep 86400
# docker exec cni sh -c "ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
Docker creates a Linux network namespace together with the new container. To get the list of network namespaces, just provide the correct path for the iproute2 commands:
# ln -s /var/run/docker/netns/ /var/run/netns
# ip netns list
5c35ec636755
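Instead of symlinking, the namespace path can also be read directly from Docker's metadata; the SandboxKey field of docker inspect points at it (a sketch, its value should match the entry above):
# docker inspect -f '{{.NetworkSettings.SandboxKey}}' cni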
Now we know which namespace is used. Using that name, we can create a veth pair of interfaces, one for the host and one for the container:
# ip link add v11 type veth peer name v12 netns 5c35ec636755
# ip a show dev v11
5: v11@if2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 0a:fe:9d:1d:6a:2f brd ff:ff:ff:ff:ff:ff link-netnsid 0
# ip netns exec 5c35ec636755 ip a show dev v12
2: v12@if5: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether be:aa:c5:68:b6:3c brd ff:ff:ff:ff:ff:ff link-netnsid 0
# ip link set v11 up
Add an IP address to the interface inside the container:
# ip netns exec 5c35ec636755 ip link set v12 up
# ip netns exec 5c35ec636755 ip addr add 192.168.101.3/24 dev v12
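The container now has an address but no route beyond the bridge subnet; if traffic outside 192.168.101.0/24 were needed, a default route via the bridge address could be added (optional for the ping test below):
# ip netns exec 5c35ec636755 ip route add default via 192.168.101.1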
Now connect the host endpoint of the veth pair to the bridge brtest:
# brctl addif brtest v11
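On hosts without bridge-utils, the equivalent iproute2 command would be:
# ip link set v11 master brtest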
Run some tests:
# docker exec cni sh -c "ping -c 3 192.168.101.1"
PING 192.168.101.1 (192.168.101.1): 56 data bytes
64 bytes from 192.168.101.1: seq=0 ttl=64 time=0.061 ms
64 bytes from 192.168.101.1: seq=1 ttl=64 time=0.107 ms
64 bytes from 192.168.101.1: seq=2 ttl=64 time=0.113 ms
--- 192.168.101.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.061/0.093/0.113 ms
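The reverse direction can be verified from the host as well (output omitted):
# ping -c 3 192.168.101.3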
Communication is established between the container and the host. This is pretty close to what the CNI bridge plugin does; I will take a closer look at it in the next blog post.