CNI is a simple interface based on environment variables and a JSON config. Both are consumed by a CNI plugin executed by a container management system such as Kubernetes. The CNI plugin is responsible for connecting the container's network namespace to the selected network; its second responsibility, assigning an IP address, is not built into the main CNI plugin but delegated to an IPAM plugin. The IPAM plugin behaves just like a CNI plugin: it expects the same environment variables and the same JSON config on stdin. The idea of CNI comes from CoreOS, and it is now maintained by the Cloud Native Computing Foundation; the main repo is available on GitHub and includes the specification, libraries, and a bunch of reference plugins. As I mentioned, it is a simple interface, so we can even use a bash script as a CNI plugin, e.g. bash-cni-plugin: all it has to do is read the JSON configuration from stdin, process the selected environment variables, and output the result as JSON.
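To make the contract concrete, here is a minimal sketch of what such a shell plugin could look like (the structure follows the CNI spec; the hard-coded result and the omitted setup logic are illustrative assumptions, not code taken from bash-cni-plugin):

#!/bin/bash
# The network config arrives as JSON on stdin ...
config=$(cat /dev/stdin)

# ... and the parameters arrive as CNI_* environment variables.
case "$CNI_COMMAND" in
ADD)
    # A real plugin would create a veth pair here, move one end into
    # $CNI_NETNS as $CNI_IFNAME, and ask the IPAM plugin for an address;
    # this sketch only prints a syntactically valid result.
    cat <<EOF
{
    "cniVersion": "0.3.1",
    "ips": [{ "version": "4", "address": "192.168.100.105/24" }]
}
EOF
    ;;
DEL)
    # Tear down whatever ADD created for $CNI_CONTAINERID.
    ;;
VERSION)
    echo '{ "cniVersion": "0.3.1", "supportedVersions": [ "0.3.1" ] }'
    ;;
esac

Now I will try to achieve the same result as in previous blog posts by connecting a container to a bridged network, but this time using the standard CNI plugin bridge: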

# cat ../etc/bridge.conf
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "br1",
    "isDefaultGateway": true,
    "forceAddress": false,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "192.168.100.0/24",
    "rangeStart": "192.168.100.100",
    "rangeEnd": "192.168.100.200",
    "gateway": "192.168.100.1"
    }
}
# docker run -d --name cni --net=none busybox:latest sleep 86400
# docker inspect cni | egrep -i "netns|networkid"
    "SandboxKey": "/var/run/docker/netns/a55f04b198dc",
        "NetworkID":"CNI_CONTAINERID=a55f04b198dc657403cdfbc03d5df89d31642a57c2f6c5b627ee9408ee4c1509",
# CNI_COMMAND=ADD CNI_CONTAINERID=a55f04b198dc657403cdfbc03d5df89d31642a57c2f6c5b627ee9408ee4c1509 CNI_NETNS=/var/run/docker/netns/a55f04b198dc CNI_IFNAME=eth0 CNI_PATH=`pwd` ./bridge < ../etc/bridge.conf
{
    "cniVersion": "0.3.1",
    "interfaces": [
        {
            "name": "br1",
            "mac": "fa:21:87:f5:ad:30"
        },
        {
            "name": "veth548c52c8",
            "mac": "3e:b4:41:9a:cf:d5"
        },
        {
            "name": "eth0",
            "mac": "1e:c6:6a:09:05:38",
            "sandbox": "/var/run/docker/netns/a55f04b198dc"
        }
    ],
    "ips": [
        {
            "version": "4",
            "interface": 2,
            "address": "192.168.100.105/24",
            "gateway": "192.168.100.1"
        }
    ],
    "routes": [
        {
            "dst": "0.0.0.0/0",
            "gw": "192.168.100.1"
        }
    ],
    "dns": {}
}
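Note that the ips section in the result above was not produced by bridge itself: bridge executed the host-local IPAM plugin with exactly the same environment variables and the same JSON on stdin, and merged its answer into the result. Assuming the host-local binary sits next to bridge in the current directory, you can exercise it the same way (the container ID is reused here just for illustration); it answers with only the IPAM part of the result (ips, optional routes, dns) and records the lease as a file under /var/lib/cni/networks/mynet:

# CNI_COMMAND=ADD CNI_CONTAINERID=a55f04b198dc657403cdfbc03d5df89d31642a57c2f6c5b627ee9408ee4c1509 CNI_NETNS=/var/run/docker/netns/a55f04b198dc CNI_IFNAME=eth0 CNI_PATH=`pwd` ./host-local < ../etc/bridge.conf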
# docker exec -ti cni sh -c "ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if5: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 1e:c6:6a:09:05:38 brd ff:ff:ff:ff:ff:ff
    inet 192.168.100.105/24 brd 192.168.100.255 scope global eth0
       valid_lft forever preferred_lft forever
# docker exec cni sh -c "ping -c 3 192.168.100.1"
PING 192.168.100.1 (192.168.100.1): 56 data bytes
64 bytes from 192.168.100.1: seq=0 ttl=64 time=0.122 ms
64 bytes from 192.168.100.1: seq=1 ttl=64 time=0.062 ms
64 bytes from 192.168.100.1: seq=2 ttl=64 time=0.072 ms

--- 192.168.100.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
# CNI_COMMAND=DEL CNI_CONTAINERID=a55f04b198dc657403cdfbc03d5df89d31642a57c2f6c5b627ee9408ee4c1509 CNI_NETNS=/var/run/docker/netns/a55f04b198dc CNI_IFNAME=eth0 CNI_PATH=`pwd` ./bridge < ../etc/bridge.conf
# docker exec -ti cni sh -c "ip a"
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever

How does Kubernetes interact with CNI plugins? Kubelet is responsible for creating pods/containers and attaching them to the selected network; its basic configuration contains a few pretty self-explanatory switches:

--network-plugin=cni
--cni-conf-dir=/etc/cni/net.d
--cni-bin-dir=/opt/cni/bin

so choose your CNI plugin, install it in /opt/cni/bin, and put its configuration into /etc/cni/net.d. Some of the most popular CNI plugins are Calico, Flannel, and Weave Net.
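For the manual bridge example above, that boils down to copying the binaries and the config into those directories (the 10-mynet.conf name is just a choice; kubelet picks up the lexicographically first config file it finds in --cni-conf-dir):

# cp bridge host-local /opt/cni/bin/
# cp ../etc/bridge.conf /etc/cni/net.d/10-mynet.conf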

Under the hood, kubelet issues the same simple CNI commands, but through a dedicated library; see for example how the ADD command is invoked in cni.go.
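As a sketch of what that amounts to, assuming the current libcni API from github.com/containernetworking/cni/libcni (older releases expose the same calls without the context argument), a kubelet-style ADD boils down to:

package main

import (
	"context"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Plugin search path, the equivalent of --cni-bin-dir.
	cniConfig := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)

	// Network config, the equivalent of a file in --cni-conf-dir.
	netConf, err := libcni.ConfFromFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Runtime parameters that kubelet derives from the pod sandbox;
	// libcni turns them into the CNI_* environment variables.
	rt := &libcni.RuntimeConf{
		ContainerID: "a55f04b198dc657403cdfbc03d5df89d31642a57c2f6c5b627ee9408ee4c1509",
		NetNS:       "/var/run/docker/netns/a55f04b198dc",
		IfName:      "eth0",
	}

	// The equivalent of CNI_COMMAND=ADD ./bridge < bridge.conf.
	result, err := cniConfig.AddNetwork(context.TODO(), netConf, rt)
	if err != nil {
		log.Fatal(err)
	}
	_ = result.Print() // dumps the same JSON result we saw above
}

Everything we exported by hand earlier (CNI_COMMAND, CNI_CONTAINERID, CNI_NETNS, CNI_IFNAME, CNI_PATH) is assembled by libcni from the RuntimeConf and the plugin search path.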