The default kubelet configuration uses Docker as the container runtime. Containerd ships its own implementation of CRI, which fulfills the same requirements as Docker, i.e. it is responsible for maintaining container images, running containers, and so on. Containerd extends its functionality through plugins, one of which is cri. To interact with containerd we can use its built-in tool called ctr; to get a list of all the plugins:

# ctr plugins list
TYPE                            ID                    PLATFORMS      STATUS
io.containerd.content.v1        content               -              ok
io.containerd.snapshotter.v1    btrfs                 linux/amd64    error
io.containerd.snapshotter.v1    aufs                  linux/amd64    ok
io.containerd.snapshotter.v1    native                linux/amd64    ok
io.containerd.snapshotter.v1    overlayfs             linux/amd64    ok
io.containerd.snapshotter.v1    zfs                   linux/amd64    error
io.containerd.metadata.v1       bolt                  -              ok
io.containerd.differ.v1         walking               linux/amd64    ok
io.containerd.gc.v1             scheduler             -              ok
io.containerd.service.v1        containers-service    -              ok
io.containerd.service.v1        content-service       -              ok
io.containerd.service.v1        diff-service          -              ok
io.containerd.service.v1        images-service        -              ok
io.containerd.service.v1        leases-service        -              ok
io.containerd.service.v1        namespaces-service    -              ok
io.containerd.service.v1        snapshots-service     -              ok
io.containerd.runtime.v1        linux                 linux/amd64    ok
io.containerd.runtime.v2        task                  linux/amd64    ok
io.containerd.monitor.v1        cgroups               linux/amd64    ok
io.containerd.service.v1        tasks-service         -              ok
io.containerd.grpc.v1           containers            -              ok
io.containerd.grpc.v1           content               -              ok
io.containerd.grpc.v1           diff                  -              ok
io.containerd.grpc.v1           events                -              ok
io.containerd.grpc.v1           healthcheck           -              ok
io.containerd.grpc.v1           images                -              ok
io.containerd.grpc.v1           leases                -              ok
io.containerd.grpc.v1           namespaces            -              ok
io.containerd.internal.v1       opt                   -              ok
io.containerd.grpc.v1           snapshots             -              ok
io.containerd.grpc.v1           tasks                 -              ok
io.containerd.grpc.v1           version               -              ok
io.containerd.grpc.v1           cri                   linux/amd64    ok

Another concept behind containerd is namespaces: a namespace separates resources such as containers, images, etc. This is quite useful because Docker, which became OCI-compliant in version 1.11, also uses containerd to run containers, just like kubelet does; since they use different namespaces, there is no collision when Kubernetes and Docker are used separately on the same machine. So let's get back to kubelet and containerd. To make things work, we should first ensure that containerd is installed, then prepare a basic configuration:

# cat /etc/containerd/config.toml
# Kubernetes doesn't use the containerd restart manager.
disabled_plugins = ["restart"]

[debug]
  level = ""

[grpc]
  max_recv_message_size = 16777216
  max_send_message_size = 16777216

[plugins.linux]
  shim = "/usr/bin/containerd-shim"
  runtime = "/usr/bin/runc"

[plugins.cri]
  stream_server_address = "127.0.0.1"
  max_container_log_line_size = -1
  sandbox_image = "k8s.gcr.io/pause:3.1"

[plugins.cri.cni]
  bin_dir = "/opt/cni/bin"
  conf_dir = "/etc/cni/net.d"
  conf_template = ""

[plugins.cri.containerd.untrusted_workload_runtime]
  runtime_type = ""
  runtime_engine = ""
  runtime_root = ""

[plugins.cri.registry]
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."docker.io"]
  endpoint = ["https://registry-1.docker.io"]
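
Rather than writing the file from scratch, containerd can also print its full built-in default configuration, which you can redirect to /etc/containerd/config.toml and trim down to a fragment like the one above. A small sketch, guarded so it is harmless on machines where containerd is not installed:

```shell
# Dump containerd's built-in defaults as a starting point for config.toml.
# `containerd config default` is available since containerd 1.1.
if command -v containerd >/dev/null 2>&1; then
  out=$(containerd config default)
else
  out="containerd binary not found in PATH"
fi
printf '%s\n' "$out"
```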

Once containerd is running, we should point kubelet at it by adding two new parameters:

--container-runtime=remote
--container-runtime-endpoint=/var/run/containerd/containerd.sock
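
How these flags are delivered depends on how kubelet is started on your node. On a systemd-managed host one common approach is a drop-in file; the path and the KUBELET_EXTRA_ARGS variable below follow the kubeadm convention and are illustrative, not mandated:

```
# /etc/systemd/system/kubelet.service.d/20-containerd.conf (illustrative path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=/var/run/containerd/containerd.sock"
```

After adding the drop-in, reload systemd and restart kubelet (systemctl daemon-reload && systemctl restart kubelet).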

After restarting kubelet we see that a new namespace shows up:

# ctr namespace list
NAME   LABELS
k8s.io
moby

The moby namespace is created by Docker, while k8s.io is created by kubelet. The required pods are being scheduled:

# ctr --namespace k8s.io container list
CONTAINER                                                           IMAGE                                                                                       RUNTIME
1d8f61d5bdc6b9abef0f012c15c5b416c371c70703842c5b65d59575ac51f123    docker.io/calico/cni:v3.4.0                                                                 io.containerd.runtime.v1.linux
...
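
The namespace separation can also be exercised directly with ctr. The sketch below pulls an image into a scratch namespace (the demo name is arbitrary) and lists its contents; resources created there stay invisible to the k8s.io and moby namespaces. It is guarded so it does nothing on machines where containerd is not running:

```shell
# Pull an image into an arbitrary "demo" namespace and list what's there.
SOCK=/var/run/containerd/containerd.sock
if [ -S "$SOCK" ]; then
  ctr --namespace demo images pull docker.io/library/alpine:latest
  ctr --namespace demo images list
  # "demo" now shows up alongside k8s.io and moby:
  ctr namespace list
else
  echo "containerd socket not found at $SOCK"
fi
```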

Docker has a lot of helper subcommands, e.g. docker exec. To fill that gap there is a project called crictl. It contains a lot of useful subcommands and understands the concept of pods. To make it work, just use this sample config:

# cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 30
debug: false
# crictl ps
CONTAINER           IMAGE               CREATED             STATE               NAME                    ATTEMPT             POD ID
cedb24498cf43       a89b45f36d5ef       About an hour ago   Running             calico-node             0                   4d08b8ddcee8b
7c944ed3e89e0       ed5e65eb295ed       About an hour ago   Running             speaker                 0                   4f97fe029a26a
...
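
With that config in place, crictl covers most day-to-day docker subcommands. A few examples, with the container ID taken from whatever crictl ps reports, and a guard for machines without a CRI runtime socket:

```shell
# A few crictl equivalents of common docker subcommands.
SOCK=/var/run/containerd/containerd.sock
if [ -S "$SOCK" ]; then
  crictl pods                        # list pod sandboxes (no docker equivalent)
  crictl images                      # like `docker images`
  CID=$(crictl ps -q | head -n 1)    # grab the ID of some running container
  [ -n "$CID" ] && crictl logs "$CID"       # like `docker logs`
  [ -n "$CID" ] && crictl exec "$CID" ls /  # like `docker exec`
else
  echo "CRI runtime socket not found at $SOCK"
fi
```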