This repository was archived by the owner on Mar 9, 2022. It is now read-only.

'failed to reserve sandbox name' error after hard reboot #1014

@steven-sheehy

Description


After a VM running Kubernetes became completely unresponsive, I had to forcefully restart it. On reboot, containerd fails to start with the error below:

Jan 02 17:13:12 node1 systemd[1]: Starting containerd container runtime...
Jan 02 17:13:12 node1 systemd[1]: Started containerd container runtime.
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.977550870Z" level=info msg="starting containerd" revision=9b32062dc1f5a7c2564315c269b5059754f12b9d version=v1.2.1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.978484921Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.978553204Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.978900731Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.979456699Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." type=io.containerd.snapshotter.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.982381225Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.982466667Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.982672547Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983064870Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983127339Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983169107Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983191913Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983546460Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983593858Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983686180Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983722181Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983757936Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983789780Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983820969Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983852236Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983883538Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.983914027Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.984036581Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.984143655Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.984907233Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.984971985Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985063262Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985100153Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985130588Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985174346Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985205031Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985235589Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985264565Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985294177Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985323733Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985410854Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985448316Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985478964Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985508762Z" level=info msg="loading plugin "io.containerd.grpc.v1.cri"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985595695Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntime:{Type:io.containerd.runtime.v1.linux Engine: Root: Options:<nil>} UntrustedWorkloadRuntime:{Type: Engine: Root: Options:<nil>} Runtimes:map[] NoPivot:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginConfTemplate:} Registry:{Mirrors:map[docker.io:{Endpoints:[https://registry-1.docker.io]}] Auths:map[]} StreamServerAddress:127.0.0.1 StreamServerPort:0 EnableSelinux:false SandboxImage:k8s.gcr.io/pause:3.1 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.985709970Z" level=info msg="Connect containerd service"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.986385131Z" level=info msg="Get image filesystem path "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs""
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.987497830Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.987993570Z" level=info msg=serving... address="/run/containerd/containerd.sock"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.988296640Z" level=info msg="Start subscribing containerd event"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.988720026Z" level=info msg="Start recovering state"
Jan 02 17:13:12 node1 containerd[80463]: time="2019-01-02T17:13:12.989711956Z" level=info msg="containerd successfully booted in 0.012843s"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.089144500Z" level=debug msg="Loaded sandbox {Metadata:{ID:02ac99a0d183c1e913c228811d101b217d5286f9086e0a0316cf0ee437348b6d Name:grafana-769bc56dd-wwlkz_production_17dc6382-03d8-11e9-97aa-005056911476_0 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:grafana-769bc56dd-wwlkz,Uid:17dc6382-03d8-11e9-97aa-005056911476,Namespace:production,Attempt:0,},Hostname:grafana-769bc56dd-wwlkz,LogDirectory:/var/log/pods/17dc6382-03d8-11e9-97aa-005056911476,DnsConfig:&DNSConfig{Servers:[10.96.0.10],Searches:[production.svc.cluster.local svc.cluster.local cluster.local firescope.int],Options:[ndots:5],},PortMappings:[&PortMapping{Protocol:TCP,ContainerPort:9001,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:3000,HostPort:0,HostIp:,}],Labels:map[string]string{app: grafana,io.kubernetes.pod.name: grafana-769bc56dd-wwlkz,io.kubernetes.pod.namespace: production,io.kubernetes.pod.uid: 17dc6382-03d8-11e9-97aa-005056911476,pod-template-hash: 325671288,release: edge,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-12-19T21:50:26.710123356Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod17dc6382-03d8-11e9-97aa-005056911476,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:&Int64Value{Value:472,},ReadonlyRootfs:false,SupplementalGroups:[472],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath:/var/run/netns/cni-cac8ebaf-436e-54fb-8fe8-14768ce89b60 IP:10.214.128.56 RuntimeHandler:} Status:0xc42003dfc0 Container:0xc4200ece60 NetNS:0xc42039a6a0 StopCh:0xc42001a120}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.089713206Z" level=debug msg="Loaded sandbox {Metadata:{ID:02b3441253939c078d100ac6a5a5c63e5bb308d5904c36b3903ddd6b800e5c07 Name:prometheus-85dfb696c7-sm9g8_production_2302e97a-03d2-11e9-9447-005056911476_0 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:prometheus-85dfb696c7-sm9g8,Uid:2302e97a-03d2-11e9-9447-005056911476,Namespace:production,Attempt:0,},Hostname:prometheus-85dfb696c7-sm9g8,LogDirectory:/var/log/pods/2302e97a-03d2-11e9-9447-005056911476,DnsConfig:&DNSConfig{Servers:[10.96.0.10],Searches:[production.svc.cluster.local svc.cluster.local cluster.local firescope.int],Options:[ndots:5],},PortMappings:[&PortMapping{Protocol:TCP,ContainerPort:9090,HostPort:0,HostIp:,}],Labels:map[string]string{app: prometheus,chart: prometheus-8.1.2,component: server,heritage: Tiller,io.kubernetes.pod.name: prometheus-85dfb696c7-sm9g8,io.kubernetes.pod.namespace: production,io.kubernetes.pod.uid: 2302e97a-03d2-11e9-9447-005056911476,pod-template-hash: 4189625273,release: edge,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-12-19T21:23:11.535355976Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod2302e97a-03d2-11e9-9447-005056911476,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath:/var/run/netns/cni-f876ceaf-90c5-b3d1-5c2a-38b0178a1763 IP:10.214.128.42 RuntimeHandler:} Status:0xc4202722c0 Container:0xc4200ece80 NetNS:0xc42072c280 StopCh:0xc4206fd2e0}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.090115612Z" level=debug msg="Loaded sandbox {Metadata:{ID:095fa5bfdd75e411290d11e90ec84bc2dec87b4e15d5c6215761b2518c5f8683 Name:kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_0 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:kube-scheduler-node1,Uid:705e7ce1217a37349a5567101e60165d,Namespace:kube-system,Attempt:0,},Hostname:,LogDirectory:/var/log/pods/705e7ce1217a37349a5567101e60165d,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[],Labels:map[string]string{component: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-node1,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 705e7ce1217a37349a5567101e60165d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 705e7ce1217a37349a5567101e60165d,kubernetes.io/config.seen: 2018-12-19T21:18:55.521061971Z,kubernetes.io/config.source: file,scheduler.alpha.kubernetes.io/critical-pod: ,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod705e7ce1217a37349a5567101e60165d,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath: IP: RuntimeHandler:} Status:0xc420272b40 Container:0xc4200ecec0 NetNS:<nil> StopCh:0xc4202ee520}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.090454590Z" level=debug msg="Loaded sandbox {Metadata:{ID:119984f0ffc088a3afbedc46e3a1f60642e4537204ff21ae6ca999201fe44539 Name:kube-proxy-dnhz7_kube-system_57cb5193-03d2-11e9-9447-005056911476_1 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:kube-proxy-dnhz7,Uid:57cb5193-03d2-11e9-9447-005056911476,Namespace:kube-system,Attempt:1,},Hostname:,LogDirectory:/var/log/pods/57cb5193-03d2-11e9-9447-005056911476,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[],Labels:map[string]string{controller-revision-hash: 4270652902,io.kubernetes.pod.name: kube-proxy-dnhz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cb5193-03d2-11e9-9447-005056911476,k8s-app: kube-proxy,pod-template-generation: 6,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-12-19T21:22:46.189063942Z,kubernetes.io/config.source: api,scheduler.alpha.kubernetes.io/critical-pod: ,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/besteffort/pod57cb5193-03d2-11e9-9447-005056911476,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:true,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath: IP: RuntimeHandler:} Status:0xc4202733c0 Container:0xc4200ecee0 NetNS:<nil> StopCh:0xc4202ef4e0}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.090823026Z" level=debug msg="Loaded sandbox {Metadata:{ID:139bb0ac7e050e9e28b994e78f651a8609f426f1b5bbfc887a0d4a3350b4eee2 Name:kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_2 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:kube-scheduler-node1,Uid:705e7ce1217a37349a5567101e60165d,Namespace:kube-system,Attempt:2,},Hostname:,LogDirectory:/var/log/pods/705e7ce1217a37349a5567101e60165d,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[],Labels:map[string]string{component: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-node1,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 705e7ce1217a37349a5567101e60165d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 705e7ce1217a37349a5567101e60165d,kubernetes.io/config.seen: 2018-12-19T21:18:55.521061971Z,kubernetes.io/config.source: file,scheduler.alpha.kubernetes.io/critical-pod: ,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod705e7ce1217a37349a5567101e60165d,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath: IP: RuntimeHandler:} Status:0xc420273c00 Container:0xc4200ecf00 NetNS:<nil> StopCh:0xc4203a8d60}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.091574305Z" level=debug msg="Loaded sandbox {Metadata:{ID:193604556bad15403c30c957a0df8956be0cf1560c834713e076c0e5032acb0e Name:kube-proxy-dnhz7_kube-system_57cb5193-03d2-11e9-9447-005056911476_0 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:kube-proxy-dnhz7,Uid:57cb5193-03d2-11e9-9447-005056911476,Namespace:kube-system,Attempt:0,},Hostname:,LogDirectory:/var/log/pods/57cb5193-03d2-11e9-9447-005056911476,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[],Labels:map[string]string{controller-revision-hash: 4270652902,io.kubernetes.pod.name: kube-proxy-dnhz7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 57cb5193-03d2-11e9-9447-005056911476,k8s-app: kube-proxy,pod-template-generation: 6,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-12-19T21:22:46.189063942Z,kubernetes.io/config.source: api,scheduler.alpha.kubernetes.io/critical-pod: ,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/besteffort/pod57cb5193-03d2-11e9-9447-005056911476,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:true,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath: IP: RuntimeHandler:} Status:0xc420294cc0 Container:0xc4200ecf40 NetNS:<nil> StopCh:0xc4206a37c0}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.091984256Z" level=debug msg="Loaded sandbox {Metadata:{ID:22be9224ae90904810a18081a75145c511a1834a9b524332543f6bb093a38645 Name:coredns-78fcdf6894-mrsp7_kube-system_2381c786-03d8-11e9-97aa-005056911476_0 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:coredns-78fcdf6894-mrsp7,Uid:2381c786-03d8-11e9-97aa-005056911476,Namespace:kube-system,Attempt:0,},Hostname:coredns-78fcdf6894-mrsp7,LogDirectory:/var/log/pods/2381c786-03d8-11e9-97aa-005056911476,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[&PortMapping{Protocol:UDP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:53,HostPort:0,HostIp:,} &PortMapping{Protocol:TCP,ContainerPort:9153,HostPort:0,HostIp:,}],Labels:map[string]string{io.kubernetes.pod.name: coredns-78fcdf6894-mrsp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2381c786-03d8-11e9-97aa-005056911476,k8s-app: kube-dns,pod-template-hash: 3497892450,},Annotations:map[string]string{kubernetes.io/config.seen: 2018-12-19T21:50:51.791796041Z,kubernetes.io/config.source: api,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod2381c786-03d8-11e9-97aa-005056911476,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath:/var/run/netns/cni-ff2329e6-ee16-20a9-36bb-7617670f7e47 IP:10.214.128.63 RuntimeHandler:} Status:0xc4202954c0 Container:0xc4200ecf60 NetNS:0xc42072d8c0 StopCh:0xc4206b2d00}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.092333938Z" level=debug msg="Loaded sandbox {Metadata:{ID:2428da7afb7fe092edb0a924c2a83b0aa1c37b71a0b572f47e064757e8f0e7c9 Name:kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_2 Config:&PodSandboxConfig{Metadata:&PodSandboxMetadata{Name:kube-scheduler-node1,Uid:705e7ce1217a37349a5567101e60165d,Namespace:kube-system,Attempt:2,},Hostname:,LogDirectory:/var/log/pods/705e7ce1217a37349a5567101e60165d,DnsConfig:&DNSConfig{Servers:[10.0.22.45 1.1.1.1],Searches:[firescope.int],Options:[],},PortMappings:[],Labels:map[string]string{component: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-node1,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 705e7ce1217a37349a5567101e60165d,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 705e7ce1217a37349a5567101e60165d,kubernetes.io/config.seen: 2018-12-19T21:18:55.521061971Z,kubernetes.io/config.source: file,scheduler.alpha.kubernetes.io/critical-pod: ,},Linux:&LinuxPodSandboxConfig{CgroupParent:/kubepods/burstable/pod705e7ce1217a37349a5567101e60165d,SecurityContext:&LinuxSandboxSecurityContext{NamespaceOptions:&NamespaceOption{Network:NODE,Pid:CONTAINER,Ipc:POD,},SelinuxOptions:nil,RunAsUser:nil,ReadonlyRootfs:false,SupplementalGroups:[],Privileged:false,SeccompProfilePath:,RunAsGroup:nil,},Sysctls:map[string]string{},},} NetNSPath: IP: RuntimeHandler:} Status:0xc420295cc0 Container:0xc4200ecf80 NetNS:<nil> StopCh:0xc4206d61c0}"
Jan 02 17:13:13 node1 containerd[80463]: time="2019-01-02T17:13:13.092422629Z" level=fatal msg="Failed to run CRI service" error="failed to recover state: failed to reserve sandbox name "kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_2": name "kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_2" is reserved for "139bb0ac7e050e9e28b994e78f651a8609f426f1b5bbfc887a0d4a3350b4eee2""
containerd 1.2.1
Kubernetes 1.11.6
Ubuntu 18.04
# cat /etc/containerd/config.toml
[debug]
  level = "debug"

Most likely the abrupt shutdown caused the containerd database and filesystem to become out of sync, but I would hope containerd could be more aggressive in recovering from such an error. The container is stateless and should just be forcibly removed and re-added.
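The fatal error suggests that during state recovery the CRI plugin reserves each loaded sandbox's name in a name-to-ID registrar and aborts on the first conflict, so two on-disk records sharing the name `kube-scheduler-node1_kube-system_705e7ce1217a37349a5567101e60165d_2` kill the whole service. A minimal Python sketch of that failure mode (an illustration with abbreviated IDs, not containerd's actual code), including the tolerant skip-duplicates behavior being asked for:

```python
class Registrar:
    """Toy name -> ID registrar, modeled on the reservation error above."""

    def __init__(self):
        self._names = {}

    def reserve(self, name, id_):
        owner = self._names.get(name)
        if owner is not None and owner != id_:
            # Same shape as the fatal log message.
            raise RuntimeError(f'name "{name}" is reserved for "{owner}"')
        self._names[name] = id_


def recover_strict(sandboxes):
    """Abort-on-first-conflict recovery: one duplicate fails everything."""
    reg = Registrar()
    for sb in sandboxes:
        reg.reserve(sb["name"], sb["id"])
    return reg


def recover_tolerant(sandboxes):
    """Skip conflicting records instead of aborting (the suggested behavior)."""
    reg, skipped = Registrar(), []
    for sb in sandboxes:
        try:
            reg.reserve(sb["name"], sb["id"])
        except RuntimeError:
            skipped.append(sb["id"])  # stale record; could be cleaned up
    return reg, skipped


# Two records for the same sandbox name, as in the log above (IDs abbreviated).
loaded = [
    {"id": "139bb0ac", "name": "kube-scheduler-node1_..._2"},
    {"id": "2428da7a", "name": "kube-scheduler-node1_..._2"},
]
```

With `recover_strict(loaded)` the second reservation raises, mirroring the crash; `recover_tolerant(loaded)` keeps the first record and reports the second as skipped.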

At a minimum, is there any workaround to recover from the above? I've already tried deleting /var/lib/containerd/io.containerd.runtime.v1.linux/k8s.io/139bb0ac7e050e9e28b994e78f651a8609f426f1b5bbfc887a0d4a3350b4eee2, but it didn't change anything.
