This repository was archived by the owner on May 12, 2021. It is now read-only.

Consul in Kata Containers doesn't have network connectivity (K8s) #613

@raravena80
Description of problem

I'm having issues running a 3-node Consul cluster in Kata Containers (on a Kubernetes cluster). The same setup runs successfully with the runC runtime. Consul needs a lot of ports open, and it communicates between nodes using a gossip protocol (plus Raft to coordinate its quorum).

Following this: https://github.com/raravena80/kubeconsulvaultoss

More specifically these Kubernetes Consul services:

https://github.com/raravena80/kubeconsulvaultoss/tree/master/consul-kata-deployments
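For anyone reproducing this, Consul's documented default ports are a useful checklist when ruling out (or confirming) a connectivity problem between the Kata pods. The probe below is not part of the linked repo — it is a minimal sketch, assuming the deployment keeps Consul's default port assignments:

```python
import socket

# Consul's documented default ports (assumed unchanged in this deployment).
CONSUL_PORTS = {
    "server_rpc": 8300,  # Raft/RPC between servers
    "serf_lan": 8301,    # LAN gossip (TCP and UDP)
    "serf_wan": 8302,    # WAN gossip (TCP and UDP)
    "http_api": 8500,
    "dns": 8600,
}

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running `tcp_reachable("10.106.237.83", 8301)` (one of the pod IPs from the logs below) from a sibling pod would distinguish a blocked-port problem from the in-guest failure shown later. Note this only covers TCP; serf gossip also uses UDP on 8301/8302.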

Expected result

Something like this in the logs of each node:

WARNING: LAN keyring exists but -encrypt given, using keyring
WARNING: WAN keyring exists but -encrypt given, using keyring
bootstrap_expect > 0: expecting 3 servers
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v1.2.2'
           Node ID: 'bef2c086-15c6-ec99-36a2-79a258adb9c5'
         Node name: 'consul-1'
        Datacenter: 'us-central1-c' (Segment: '<all>')
            Server: true (Bootstrap: false)
       Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.106.237.83 (LAN: 8301, WAN: 8302)
           Encrypt: Gossip: true, TLS-Outgoing: false, TLS-Incoming: false

==> Log data will now stream in as it occurs:

    2018/08/21 03:20:15 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:bef2c086-15c6-ec99-36a2-79a258adb9c5 Address:10.106.237.83:8300} {Suffrage:Voter ID:31f9fc85-72ac-d5ca-b193-b559a2c322c2 Address:10.100.224.4:8300} {Suffrage:Voter ID:b570f248-ab94-de7d-a5c3-295cca24e6c6 Address:10.110.8.50:8300}]
    2018/08/21 03:20:15 [INFO] raft: Node at 10.106.237.83:8300 [Follower] entering Follower state (Leader: "")
    2018/08/21 03:20:15 [INFO] serf: EventMemberJoin: consul-1.us-central1-c 10.106.237.83
    2018/08/21 03:20:15 [INFO] serf: EventMemberJoin: consul-1 10.106.237.83
    2018/08/21 03:20:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (udp)
    2018/08/21 03:20:15 [INFO] serf: Attempting re-join to previously known node: consul-3.us-central1-c: 10.110.8.50:8302
    2018/08/21 03:20:15 [INFO] serf: Attempting re-join to previously known node: vault-1: 192.168.4.28:8301
    2018/08/21 03:20:15 [INFO] consul: Adding LAN server consul-1 (Addr: tcp/10.106.237.83:8300) (DC: us-central1-c)
    2018/08/21 03:20:15 [INFO] consul: Raft data found, disabling bootstrap mode
    2018/08/21 03:20:15 [INFO] consul: Handled member-join event for server "consul-1.us-central1-c" in area "wan"
    2018/08/21 03:20:15 [INFO] agent: Started DNS server 0.0.0.0:8600 (tcp)
    2018/08/21 03:20:15 [INFO] agent: Started HTTP server on [::]:8500 (tcp)
    2018/08/21 03:20:15 [INFO] agent: started state syncer
    2018/08/21 03:20:15 [INFO] serf: EventMemberJoin: consul-2.us-central1-c 10.100.224.4
    2018/08/21 03:20:15 [INFO] serf: EventMemberJoin: consul-3.us-central1-c 10.110.8.50
    2018/08/21 03:20:15 [INFO] serf: Re-joined to previously known node: consul-3.us-central1-c: 10.110.8.50:8302
    2018/08/21 03:20:15 [INFO] consul: Handled member-join event for server "consul-2.us-central1-c" in area "wan"
    2018/08/21 03:20:15 [INFO] consul: Handled member-join event for server "consul-3.us-central1-c" in area "wan"
    2018/08/21 03:20:16 [DEBUG] raft-net: 10.106.237.83:8300 accepted connection from: 192.168.3.24:47790
    2018/08/21 03:20:16 [DEBUG] raft-net: 10.106.237.83:8300 accepted connection from: 192.168.3.24:47792
    2018/08/21 03:20:16 [WARN] raft: Failed to get previous log: 3885 log not found (last: 3884)
    2018/08/21 03:20:22 [ERR] agent: failed to sync remote state: No cluster leader
    2018/08/21 03:20:25 [INFO] serf: Attempting re-join to previously known node: vault-2: 192.168.1.33:8301
    2018/08/21 03:20:27 [INFO] serf: EventMemberJoin: consul-3 10.110.8.50
    2018/08/21 03:20:27 [INFO] serf: EventMemberJoin: consul-2 10.100.224.4
    2018/08/21 03:20:27 [INFO] consul: Adding LAN server consul-3 (Addr: tcp/10.110.8.50:8300) (DC: us-central1-c)
    2018/08/21 03:20:27 [INFO] consul: Adding LAN server consul-2 (Addr: tcp/10.100.224.4:8300) (DC: us-central1-c)
    2018/08/21 03:20:27 [INFO] consul: New leader elected: consul-3
    2018/08/21 03:20:29 [INFO] agent: Synced node info
    2018/08/21 03:20:35 [INFO] serf: Attempting re-join to previously known node: consul-3: 10.110.8.50:8301
    2018/08/21 03:20:35 [INFO] serf: Re-joined to previously known node: consul-3: 10.110.8.50:8301

Actual result

No logs appear, and the agent eventually errors out.

WARNING: LAN keyring exists but -encrypt given, using keyring
WARNING: WAN keyring exists but -encrypt given, using keyring
bootstrap_expect > 0: expecting 3 servers
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul server: Failed to start Raft: invalid argument
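One unconfirmed hypothesis for `Failed to start Raft: invalid argument`: Consul's Raft store (BoltDB) memory-maps its data directory, and 9pfs shares — which Kata uses for pod volumes here — are known to have limited `mmap` support, so the EINVAL could come from `mmap(2)` on `/consul/data`. A small probe, run from inside the container against the data directory, could test that assumption:

```python
import mmap
import os

def mmap_works(directory: str) -> bool:
    """Try to memory-map a small file in `directory`.

    BoltDB (Consul's Raft log store) requires a working file-backed
    mmap on the data dir; on a 9pfs share without mmap support this
    typically fails with EINVAL ("invalid argument").
    """
    path = os.path.join(directory, ".mmap-probe")
    try:
        with open(path, "wb+") as f:
            f.write(b"\0" * 4096)
            f.flush()
            mm = mmap.mmap(f.fileno(), 4096)
            mm.close()
        return True
    except (OSError, ValueError):
        return False
    finally:
        try:
            os.remove(path)
        except OSError:
            pass
```

If `mmap_works("/consul/data")` returns False inside the Kata pod but True under runC, that would point at the 9p-backed volume rather than at networking.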

This is what the qemu command line looks like:

root     21206 21184  2 03:39 ?        00:00:02 /opt/kata/bin/qemu-system-x86_64 -name sandbox-5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 -uuid 34e884aa-2ae7-4fe7-b8f9-d2de77a74b35 -machine pc,accel=kvm,kernel_irqchip,nvdimm -cpu host,pmu=off -qmp unix:/run/vc/vm/5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821/qmp.sock,server,nowait -m 2048M,slots=2,maxmem=4717M -device pci-bridge,bus=pci.0,id=pci-bridge-0,chassis_nr=1,shpc=on,addr=2 -device virtio-serial-pci,disable-modern=true,id=serial0 -device virtconsole,chardev=charconsole0,id=console0 -chardev socket,id=charconsole0,path=/run/vc/vm/5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821/console.sock,server,nowait -device nvdimm,id=nv0,memdev=mem0 -object memory-backend-file,id=mem0,mem-path=/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.2.0_agent_fcfa054a757.img,size=536870912 -device virtio-scsi-pci,id=scsi0,disable-modern=true -device virtserialport,chardev=charch0,id=channel0,name=agent.channel.0 -chardev socket,id=charch0,path=/run/vc/vm/5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821/kata.sock,server,nowait -device virtio-9p-pci,disable-modern=true,fsdev=extra-9p-kataShared,mount_tag=kataShared -fsdev local,id=extra-9p-kataShared,path=/run/kata-containers/shared/sandboxes/5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821,security_model=none -netdev tap,id=network-0,vhost=on,vhostfds=3:4:5:6:7:8:9:10,fds=11:12:13:14:15:16:17:18 -device driver=virtio-net-pci,netdev=network-0,mac=9e:38:c1:ae:f1:f2,disable-modern=true,mq=on,vectors=18 -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -vga none -no-user-config -nodefaults -nographic -daemonize -kernel /opt/kata/share/kata-containers/vmlinuz-4.14.51-7 -append tsc=reliable no_timer_check rcupdate.rcu_expedited=1 i8042.direct=1 i8042.dumbkbd=1 i8042.nopnp=1 i8042.noaux=1 noreplace-smp reboot=k console=hvc0 console=hvc1 iommu=off 
cryptomgr.notests net.ifnames=0 pci=lastbus=0 root=/dev/pmem0p1 rootflags=dax,data=ordered,errors=remount-ro rw rootfstype=ext4 quiet systemd.show_status=false panic=1 nr_cpus=1 init=/usr/lib/systemd/systemd systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket -smp 1,cores=1,threads=1,sockets=1,maxcpus=1

Meta details

Running kata-collect-data.sh version 1.2.0 (commit 0bcb32f7042625ea0ecee8f37b99b4459ebd5af8) at 2018-08-21.03:43:36.413143875+0000.


Runtime is /opt/kata/bin/kata-runtime.

kata-env

Output of "/opt/kata/bin/kata-runtime kata-env":

[Meta]
  Version = "1.0.13"

[Runtime]
  Debug = false
  [Runtime.Version]
    Semver = "1.2.0"
    Commit = "0bcb32f7042625ea0ecee8f37b99b4459ebd5af8"
    OCI = "1.0.1"
  [Runtime.Config]
    Path = "/opt/kata/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 2.11.2(kata-static)\nCopyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers"
  Path = "/opt/kata/bin/qemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  Msize9p = 8192
  Debug = false
  UseVSock = false

[Image]
  Path = "/opt/kata/share/kata-containers/kata-containers-image_clearlinux_1.2.0_agent_fcfa054a757.img"

[Kernel]
  Path = "/opt/kata/share/kata-containers/vmlinuz-4.14.51-7"
  Parameters = ""

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.2.0-17962180fc61c66066905546bfd5d8933bf73df1"
  Path = "/opt/kata/libexec/kata-containers/kata-proxy"
  Debug = false

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.2.0-0a37760c0224167143cb3cc920c78f5147f52e70"
  Path = "/opt/kata/libexec/kata-containers/kata-shim"
  Debug = false

[Agent]
  Type = "kata"

[Host]
  Kernel = "4.15.0-1017-gcp"
  Architecture = "amd64"
  VMContainerCapable = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "18.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) CPU @ 2.30GHz"

Runtime config files

Runtime default config files

/etc/kata-containers/configuration.toml
/opt/kata/share/defaults/kata-containers/configuration.toml

Runtime config file contents

Config file /etc/kata-containers/configuration.toml not found
Output of "cat "/opt/kata/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
#default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Default false
#enable_template = true

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Output of "cat "/usr/share/defaults/kata-containers/configuration.toml"":

# Copyright (c) 2017-2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#

# XXX: WARNING: this file is auto-generated.
# XXX:
# XXX: Source file: "cli/config/configuration.toml.in"
# XXX: Project:
# XXX:   Name: Kata Containers
# XXX:   Type: kata

[hypervisor.qemu]
path = "/opt/kata/bin/qemu-system-x86_64"
kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
image = "/opt/kata/share/kata-containers/kata-containers.img"
machine_type = "pc"

# Optional space-separated list of options to pass to the guest kernel.
# For example, use `kernel_params = "vsyscall=emulate"` if you are having
# trouble running pre-2.15 glibc.
#
# WARNING: - any parameter specified here will take priority over the default
# parameter value of the same name used to start the virtual machine.
# Do not set values here unless you understand the impact of doing so as you
# may stop the virtual machine from booting.
# To see the list of default parameters, enable hypervisor debug, create a
# container and look for 'default-kernel-parameters' log entries.
kernel_params = ""

# Path to the firmware.
# If you want that qemu uses the default firmware leave this option empty
firmware = ""

# Machine accelerators
# comma-separated list of machine accelerators to pass to the hypervisor.
# For example, `machine_accelerators = "nosmm,nosmbus,nosata,nopit,static-prt,nofw"`
machine_accelerators=""

# Default number of vCPUs per SB/VM:
# unspecified or 0                --> will be set to 1
# < 0                             --> will be set to the actual number of physical cores
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores
default_vcpus = 1

# Default maximum number of vCPUs per SB/VM:
# unspecified or == 0             --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# > 0 <= number of physical cores --> will be set to the specified number
# > number of physical cores      --> will be set to the actual number of physical cores or to the maximum number
#                                     of vCPUs supported by KVM if that number is exceeded
# WARNING: Depending of the architecture, the maximum number of vCPUs supported by KVM is used when
# the actual number of physical cores is greater than it.
# WARNING: Be aware that this value impacts the virtual machine's memory footprint and CPU
# the hotplug functionality. For example, `default_maxvcpus = 240` specifies that until 240 vCPUs
# can be added to a SB/VM, but the memory footprint will be big. Another example, with
# `default_maxvcpus = 8` the memory footprint will be small, but 8 will be the maximum number of
# vCPUs supported by the SB/VM. In general, we recommend that you do not edit this variable,
# unless you know what are you doing.
default_maxvcpus = 0

# Bridges can be used to hot plug devices.
# Limitations:
# * Currently only pci bridges are supported
# * Until 30 devices per bridge can be hot plugged.
# * Until 5 PCI bridges can be cold plugged per VM.
#   This limitation could be a bug in qemu or in the kernel
# Default number of bridges per SB/VM:
# unspecified or 0   --> will be set to 1
# > 1 <= 5           --> will be set to the specified number
# > 5                --> will be set to 5
default_bridges = 1

# Default memory size in MiB for SB/VM.
# If unspecified then it will be set 2048 MiB.
#default_memory = 2048

# Disable block device from being used for a container's rootfs.
# In case of a storage driver like devicemapper where a container's
# root file system is backed by a block device, the block device is passed
# directly to the hypervisor for performance reasons.
# This flag prevents the block device from being passed to the hypervisor,
# 9pfs is used instead to pass the rootfs.
disable_block_device_use = false

# Block storage driver to be used for the hypervisor in case the container
# rootfs is backed by a block device. This is either virtio-scsi or
# virtio-blk.
block_device_driver = "virtio-scsi"

# Enable iothreads (data-plane) to be used. This causes IO to be
# handled in a separate IO thread. This is currently only implemented
# for SCSI.
#
enable_iothreads = false

# Enable pre allocation of VM RAM, default false
# Enabling this will result in lower container density
# as all of the memory will be allocated and locked
# This is useful when you want to reserve all the memory
# upfront or in the cases where you want memory latencies
# to be very predictable
# Default false
#enable_mem_prealloc = true

# Enable huge pages for VM RAM, default false
# Enabling this will result in the VM memory
# being allocated using huge pages.
# This is useful when you want to use vhost-user network
# stacks within the container. This will automatically
# result in memory pre allocation
#enable_hugepages = true

# Enable swap of vm memory. Default false.
# The behaviour is undefined if mem_prealloc is also set to true
#enable_swap = true

# This option changes the default hypervisor and kernel parameters
# to enable debug output where available. This extra output is added
# to the proxy logs, but only when proxy debug is also enabled.
#
# Default false
#enable_debug = true

# Disable the customizations done in the runtime when it detects
# that it is running on top a VMM. This will result in the runtime
# behaving as it would when running on bare metal.
#
#disable_nesting_checks = true

# This is the msize used for 9p shares. It is the number of bytes
# used for 9p packet payload.
#msize_9p = 8192

# If true and vsocks are supported, use vsocks to communicate directly
# with the agent and no proxy is started, otherwise use unix
# sockets and start a proxy to communicate with the agent.
# Default false
#use_vsock = true

[factory]
# VM templating support. Once enabled, new VMs are created from template
# using vm cloning. They will share the same initial kernel, initramfs and
# agent memory by mapping it readonly. It helps speeding up new container
# creation and saves a lot of memory if there are many kata containers running
# on the same host.
#
# When disabled, new VMs are created from scratch.
#
# Default false
#enable_template = true

[proxy.kata]
path = "/opt/kata/libexec/kata-containers/kata-proxy"

# If enabled, proxy messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[shim.kata]
path = "/opt/kata/libexec/kata-containers/kata-shim"

# If enabled, shim messages will be sent to the system log
# (default: disabled)
#enable_debug = true

[agent.kata]
# There is no field for this section. The goal is only to be able to
# specify which type of agent the user wants to use.

[runtime]
# If enabled, the runtime will log additional debug messages to the
# system log
# (default: disabled)
#enable_debug = true
#
# Internetworking model
# Determines how the VM should be connected to the
# the container network interface
# Options:
#
#   - bridged
#     Uses a linux bridge to interconnect the container interface to
#     the VM. Works for most cases except macvlan and ipvlan.
#
#   - macvtap
#     Used when the Container network interface can be bridged using
#     macvtap.
internetworking_model="macvtap"

Image details

---
osbuilder:
  url: "https://github.com/kata-containers/osbuilder"
  version: "1.2.0-ae14163ca2fad2ad41154cda31554a3075e8f5b8"
rootfs-creation-time: "2018-08-10T14:15:04.923100607-0500Z"
description: "osbuilder rootfs"
file-format-version: "0.0.2"
architecture: "x86_64"
base-distro:
  name: "Clear"
  version: "24330"
  packages:
    default:
      - "iptables-bin"
      - "libudev0-shim"
      - "systemd"
    extra:

agent:
  url: "https://github.com/kata-containers/agent"
  name: "kata-agent"
  version: "1.2.0-fcfa054a757e7c17afba47b0b4d7e91cbb8688ed"
  agent-is-init-daemon: "no"

Initrd details

No initrd


Logfiles

Runtime logs

Recent runtime problems found in system journal:

time="2018-08-21T02:35:56.766752532Z" level=error msg="Container 129438794d5d80717bcc252b50b9c1213780f53beae8bea4895b8a3548bb0684 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=129438794d5d80717bcc252b50b9c1213780f53beae8bea4895b8a3548bb0684 name=kata-runtime pid=5536 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:36:20.767691641Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=e493522907935dcb1c38b76bfd54b0b3e537a60bd75a85ded6185c3391c07acc name=kata-runtime pid=5738 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:36:20.767855941Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=e493522907935dcb1c38b76bfd54b0b3e537a60bd75a85ded6185c3391c07acc name=kata-runtime pid=5738 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:36:24.522043723Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=e493522907935dcb1c38b76bfd54b0b3e537a60bd75a85ded6185c3391c07acc name=kata-runtime pid=5848 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:36:24.603195807Z" level=error msg="Container e493522907935dcb1c38b76bfd54b0b3e537a60bd75a85ded6185c3391c07acc not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=e493522907935dcb1c38b76bfd54b0b3e537a60bd75a85ded6185c3391c07acc name=kata-runtime pid=5884 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:37:06.767461386Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=4dafaa4374d90eb952a18e818e71e481d6966bab040612487d9443f046c21820 name=kata-runtime pid=6141 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:37:06.767619405Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=4dafaa4374d90eb952a18e818e71e481d6966bab040612487d9443f046c21820 name=kata-runtime pid=6141 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:37:10.49126528Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=4dafaa4374d90eb952a18e818e71e481d6966bab040612487d9443f046c21820 name=kata-runtime pid=6264 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:37:10.576092752Z" level=error msg="Container 4dafaa4374d90eb952a18e818e71e481d6966bab040612487d9443f046c21820 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=4dafaa4374d90eb952a18e818e71e481d6966bab040612487d9443f046c21820 name=kata-runtime pid=6300 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:38:30.760734078Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=ac2e77e5686c0e3abffa4c0ddfe7e462d01b061191b0a7195528c086967ca46b name=kata-runtime pid=6688 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:38:30.760869465Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=ac2e77e5686c0e3abffa4c0ddfe7e462d01b061191b0a7195528c086967ca46b name=kata-runtime pid=6688 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=virtcontainers subsystem=qemu
time="2018-08-21T02:38:34.415085289Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=ac2e77e5686c0e3abffa4c0ddfe7e462d01b061191b0a7195528c086967ca46b name=kata-runtime pid=6796 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:38:34.501100626Z" level=error msg="Container ac2e77e5686c0e3abffa4c0ddfe7e462d01b061191b0a7195528c086967ca46b not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=ac2e77e5686c0e3abffa4c0ddfe7e462d01b061191b0a7195528c086967ca46b name=kata-runtime pid=6831 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:38:52.69800415Z" level=error msg="Container 4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 name=kata-runtime pid=7049 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=runtime
time="2018-08-21T02:41:44.556471843Z" level=warning msg="fetch sandbox device failed" arch=amd64 command=create container=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 error="open /run/vc/sbs/dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26/devices.json: no such file or directory" name=kata-runtime pid=7765 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 sandboxid=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=sandbox
time="2018-08-21T02:41:46.560976622Z" level=warning msg="unsupported address" address="fe80::dcf8:daff:fef6:87c1/64" arch=amd64 command=create container=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 name=kata-runtime pid=7765 source=virtcontainers subsystem=kata_agent unsupported-address-type=ipv6
time="2018-08-21T02:41:46.561976595Z" level=warning msg="unsupported route" arch=amd64 command=create container=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 destination="fe80::/64" name=kata-runtime pid=7765 source=virtcontainers subsystem=kata_agent unsupported-route-type=ipv6
time="2018-08-21T02:41:47.879899516Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 name=kata-runtime pid=7884 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:41:47.880038067Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 name=kata-runtime pid=7884 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:41:48.415689601Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 name=kata-runtime pid=7971 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:41:48.415923166Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 name=kata-runtime pid=7971 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:42:11.71650789Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 name=kata-runtime pid=8109 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:42:11.716641482Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 name=kata-runtime pid=8109 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=virtcontainers subsystem=qemu
time="2018-08-21T02:48:45.687631602Z" level=error msg="Container c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 name=kata-runtime pid=9735 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:45.687825289Z" level=error msg="Container eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 name=kata-runtime pid=9761 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:45.727940676Z" level=error msg="Container 5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 name=kata-runtime pid=9806 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:45.875656409Z" level=error msg="Container eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=eb0852cdbebd1f7ebe7a9cb53667e955a5f5f2aaf063ea199f25b338efa2f266 name=kata-runtime pid=9857 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:46.006744252Z" level=error msg="Container c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=c1d66efcd4fec3d1f2af1763c58fa8711b586a153b2536141d324e87c8724988 name=kata-runtime pid=9905 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:46.104770559Z" level=error msg="Container 5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=5378df8ea915db79513fb67875799c9868a239f200b58399d1bb02152172b5e5 name=kata-runtime pid=9952 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:48.448991895Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 name=kata-runtime pid=10080 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T02:48:48.591075924Z" level=error msg="Container dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 name=kata-runtime pid=10137 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=runtime
time="2018-08-21T03:39:38.173453866Z" level=warning msg="fetch sandbox device failed" arch=amd64 command=create container=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 error="open /run/vc/sbs/5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821/devices.json: no such file or directory" name=kata-runtime pid=21193 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 sandboxid=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=sandbox
time="2018-08-21T03:39:40.01400545Z" level=warning msg="unsupported address" address="fe80::9c38:c1ff:feae:f1f2/64" arch=amd64 command=create container=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 name=kata-runtime pid=21193 source=virtcontainers subsystem=kata_agent unsupported-address-type=ipv6
time="2018-08-21T03:39:40.014950124Z" level=warning msg="unsupported route" arch=amd64 command=create container=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 destination="fe80::/64" name=kata-runtime pid=21193 source=virtcontainers subsystem=kata_agent unsupported-route-type=ipv6
time="2018-08-21T03:39:40.497285803Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=394192cd80c3a62511d1314e92d83a0540690f0e073c932317690c6c8d1c0381 name=kata-runtime pid=21296 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:39:40.49742982Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=394192cd80c3a62511d1314e92d83a0540690f0e073c932317690c6c8d1c0381 name=kata-runtime pid=21296 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:42:35.854665406Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=394192cd80c3a62511d1314e92d83a0540690f0e073c932317690c6c8d1c0381 name=kata-runtime pid=22238 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:42:35.995752489Z" level=error msg="Container 394192cd80c3a62511d1314e92d83a0540690f0e073c932317690c6c8d1c0381 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=394192cd80c3a62511d1314e92d83a0540690f0e073c932317690c6c8d1c0381 name=kata-runtime pid=22288 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:42:37.181531495Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=37ea029d4bf7f633ed8e0a86e09a51704e60af5ffcfa495276378db8762d244e name=kata-runtime pid=22435 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:42:37.18166515Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=37ea029d4bf7f633ed8e0a86e09a51704e60af5ffcfa495276378db8762d244e name=kata-runtime pid=22435 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:42:41.047308276Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=37ea029d4bf7f633ed8e0a86e09a51704e60af5ffcfa495276378db8762d244e name=kata-runtime pid=22578 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:42:41.128940606Z" level=error msg="Container 37ea029d4bf7f633ed8e0a86e09a51704e60af5ffcfa495276378db8762d244e not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=37ea029d4bf7f633ed8e0a86e09a51704e60af5ffcfa495276378db8762d244e name=kata-runtime pid=22612 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:42:54.77352301Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=fac865910e1687e81430e65c7523de14983f4fec18970672b784170de3b3e561 name=kata-runtime pid=22804 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:42:54.773642612Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=fac865910e1687e81430e65c7523de14983f4fec18970672b784170de3b3e561 name=kata-runtime pid=22804 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:42:58.345182584Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=fac865910e1687e81430e65c7523de14983f4fec18970672b784170de3b3e561 name=kata-runtime pid=22913 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:42:58.43397387Z" level=error msg="Container fac865910e1687e81430e65c7523de14983f4fec18970672b784170de3b3e561 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=fac865910e1687e81430e65c7523de14983f4fec18970672b784170de3b3e561 name=kata-runtime pid=22947 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:43:25.784144791Z" level=warning msg="Cannot hotplug 1 CPUs, currently this SB has 1 CPUs and the maximum amount of CPUs is 1" arch=amd64 command=create container=3d2b0d3b408d21d9e2bf7be4b8327280034f534e9f624c3b21518e0eab05d1a9 name=kata-runtime pid=23359 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:43:25.784275293Z" level=warning msg="maximum number of vCPUs '1' has been reached" arch=amd64 command=create container=3d2b0d3b408d21d9e2bf7be4b8327280034f534e9f624c3b21518e0eab05d1a9 name=kata-runtime pid=23359 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=virtcontainers subsystem=qemu
time="2018-08-21T03:43:29.439716239Z" level=error msg="Container not ready, running or paused, impossible to signal the container" arch=amd64 command=kill container=3d2b0d3b408d21d9e2bf7be4b8327280034f534e9f624c3b21518e0eab05d1a9 name=kata-runtime pid=23468 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime
time="2018-08-21T03:43:29.521027067Z" level=error msg="Container 3d2b0d3b408d21d9e2bf7be4b8327280034f534e9f624c3b21518e0eab05d1a9 not ready, running or paused, cannot send a signal" arch=amd64 command=kill container=3d2b0d3b408d21d9e2bf7be4b8327280034f534e9f624c3b21518e0eab05d1a9 name=kata-runtime pid=23503 sandbox=5d42fd790048f4eb5a56c4d64019e4fe9c07525f6a6db6bf7423ea3a9aa09821 source=runtime

Proxy logs

Recent proxy problems found in system journal:

time="2018-08-12T03:47:26.163185237Z" level=fatal msg="accept unix /run/vc/sbs/8f7e94ec4982665407160adf2b8e5cc53084a3d571c4b402ffe4c2bb9d72ba3e/proxy.sock: use of closed network connection" name=kata-proxy pid=5695 source=proxy
time="2018-08-16T07:36:21.818976515Z" level=fatal msg="accept unix /run/vc/sbs/9b734942c893184a1bc347538823f93c7b671947b86a8e73a42b34bd9d889243/proxy.sock: use of closed network connection" name=kata-proxy pid=24076 source=proxy
time="2018-08-16T23:31:30.656362708Z" level=fatal msg="accept unix /run/vc/sbs/c3f0adfeef2bd93f3d5b5cb870f3140e28d0ec7aefbb88de0d7ef9a3d7ebc4f1/proxy.sock: use of closed network connection" name=kata-proxy pid=22200 source=proxy
time="2018-08-17T00:11:12.66691617Z" level=fatal msg="accept unix /run/vc/sbs/2dca130f6dc83247a92c02144f911d474f73fe2176cd39419b50a83fb8fc5d6a/proxy.sock: use of closed network connection" name=kata-proxy pid=27946 source=proxy
time="2018-08-17T00:42:40.684913642Z" level=fatal msg="accept unix /run/vc/sbs/cad2ba26c340b0594cc676d9cad5a67414176465a7d8b5322fd04d113685f845/proxy.sock: use of closed network connection" name=kata-proxy pid=2791 source=proxy
time="2018-08-17T17:10:12.775971324Z" level=fatal msg="accept unix /run/vc/sbs/7753ba4f03b0e01aa888dcd9b7cf8609584abb53625bd26c9668e3b1b2e773a8/proxy.sock: use of closed network connection" name=kata-proxy pid=4396 source=proxy
time="2018-08-20T01:01:02.992177156Z" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/c8a3dd63e9a2708fd00e6953361830dfe9dd172abf5affb3b464dc5ef01ae0d6/kata.sock: use of closed network connection" name=kata-proxy pid=16971 sandbox=c8a3dd63e9a2708fd00e6953361830dfe9dd172abf5affb3b464dc5ef01ae0d6 source=proxy
time="2018-08-21T02:12:26.110792948Z" level=warning msg="2018/08/21 02:12:26 [WARN] yamux: Discarding data for stream: 1591\n" component=yamux name=kata-proxy pid=3828 sandbox=7c88a3d96d296e8bd9c8695ab568eee5076e1798cfc764cda5833c57e63f6eb7 source=proxy
time="2018-08-21T02:30:34.700271119Z" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/7c88a3d96d296e8bd9c8695ab568eee5076e1798cfc764cda5833c57e63f6eb7/kata.sock: use of closed network connection" name=kata-proxy pid=3828 sandbox=7c88a3d96d296e8bd9c8695ab568eee5076e1798cfc764cda5833c57e63f6eb7 source=proxy
time="2018-08-21T02:38:52.530094088Z" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5/kata.sock: use of closed network connection" name=kata-proxy pid=4060 sandbox=4d2cfd654b585a03dda3b72f1ca1e845fc8fef4b60808294eb285d45bc9920d5 source=proxy
time="2018-08-21T02:48:48.642217807Z" level=fatal msg="failed to handle exit signal" error="close unix @->/run/vc/vm/dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26/kata.sock: use of closed network connection" name=kata-proxy pid=7801 sandbox=dfef4f0c36707c44f343d54d00d298f6eb84273e0a721f90fb05aeffbb1cbb26 source=proxy

Shim logs

No recent shim problems found in system journal.


Container manager details

Have docker

Docker

Output of "docker version":

Client:
 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:09:54 2018
 OS/Arch:           linux/amd64
 Experimental:      false
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "docker info":

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Output of "systemctl show docker":

Type=notify
Restart=on-failure
NotifyAccess=main
RestartUSec=100ms
TimeoutStartUSec=infinity
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestampMonotonic=0
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=0
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=exit-code
UID=[not set]
GID=[not set]
NRestarts=3
ExecMainStartTimestamp=Tue 2018-08-21 01:51:08 UTC
ExecMainStartTimestampMonotonic=18557445
ExecMainExitTimestamp=Tue 2018-08-21 01:51:10 UTC
ExecMainExitTimestampMonotonic=19672597
ExecMainPID=1210
ExecMainCode=1
ExecMainStatus=1
ExecStart={ path=/usr/bin/dockerd ; argv[]=/usr/bin/dockerd -H fd:// ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
ExecReload={ path=/bin/kill ; argv[]=/bin/kill -s HUP $MAINPID ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=[not set]
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=yes
DelegateControllers=cpu cpuacct io blkio memory devices pids
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=infinity
IPAccounting=no
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=infinity
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=1048576
LimitNOFILESoft=1048576
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=infinity
LimitNPROCSoft=infinity
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=14747
LimitSIGPENDINGSoft=14747
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=process
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=docker.service
Names=docker.service
Requires=docker.socket sysinit.target system.slice
Wants=network-online.target
ConsistsOf=docker.socket
Conflicts=shutdown.target
Before=shutdown.target
After=basic.target firewalld.service docker.socket sysinit.target system.slice systemd-journald.socket network-online.target
TriggeredBy=docker.socket
Documentation=https://docs.docker.com
Description=Docker Application Container Engine
LoadState=loaded
ActiveState=failed
SubState=failed
FragmentPath=/lib/systemd/system/docker.service
UnitFileState=disabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2018-08-21 01:51:10 UTC
StateChangeTimestampMonotonic=19805148
InactiveExitTimestamp=Tue 2018-08-21 01:51:10 UTC
InactiveExitTimestampMonotonic=19673382
ActiveEnterTimestampMonotonic=0
ActiveExitTimestampMonotonic=0
InactiveEnterTimestamp=Tue 2018-08-21 01:51:10 UTC
InactiveEnterTimestampMonotonic=19799843
CanStart=yes
CanStop=yes
CanReload=yes
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2018-08-21 01:51:10 UTC
ConditionTimestampMonotonic=19805133
AssertTimestamp=Tue 2018-08-21 01:51:10 UTC
AssertTimestampMonotonic=19805133
Transient=no
Perpetual=no
StartLimitIntervalUSec=1min
StartLimitBurst=3
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=1d0cfa481e7741e08a777a774ae7487f
CollectMode=inactive

Have kubectl

Kubernetes

Output of "kubectl version":

Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Output of "kubectl config view":

apiVersion: v1
clusters: []
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

Output of "systemctl show kubelet":

Type=simple
Restart=always
NotifyAccess=none
RestartUSec=10s
TimeoutStartUSec=1min 30s
TimeoutStopUSec=1min 30s
RuntimeMaxUSec=infinity
WatchdogUSec=0
WatchdogTimestamp=Tue 2018-08-21 01:50:59 UTC
WatchdogTimestampMonotonic=9637844
PermissionsStartOnly=no
RootDirectoryStartOnly=no
RemainAfterExit=no
GuessMainPID=yes
MainPID=696
ControlPID=0
FileDescriptorStoreMax=0
NFileDescriptorStore=0
StatusErrno=0
Result=success
UID=[not set]
GID=[not set]
NRestarts=0
ExecMainStartTimestamp=Tue 2018-08-21 01:50:59 UTC
ExecMainStartTimestampMonotonic=9637803
ExecMainExitTimestampMonotonic=0
ExecMainPID=696
ExecMainCode=0
ExecMainStatus=0
ExecStart={ path=/usr/bin/kubelet ; argv[]=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS ; ignore_errors=no ; start_time=[n/a] ; stop_time=[n/a] ; pid=0 ; code=(null) ; status=0/0 }
Slice=system.slice
ControlGroup=/system.slice/kubelet.service
MemoryCurrent=[not set]
CPUUsageNSec=[not set]
TasksCurrent=15
IPIngressBytes=18446744073709551615
IPIngressPackets=18446744073709551615
IPEgressBytes=18446744073709551615
IPEgressPackets=18446744073709551615
Delegate=no
CPUAccounting=no
CPUWeight=[not set]
StartupCPUWeight=[not set]
CPUShares=[not set]
StartupCPUShares=[not set]
CPUQuotaPerSecUSec=infinity
IOAccounting=no
IOWeight=[not set]
StartupIOWeight=[not set]
BlockIOAccounting=no
BlockIOWeight=[not set]
StartupBlockIOWeight=[not set]
MemoryAccounting=no
MemoryLow=0
MemoryHigh=infinity
MemoryMax=infinity
MemorySwapMax=infinity
MemoryLimit=infinity
DevicePolicy=auto
TasksAccounting=yes
TasksMax=4424
IPAccounting=no
Environment=[unprintable] KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml
EnvironmentFile=/var/lib/kubelet/kubeadm-flags.env (ignore_errors=yes)
EnvironmentFile=/etc/default/kubelet (ignore_errors=yes)
UMask=0022
LimitCPU=infinity
LimitCPUSoft=infinity
LimitFSIZE=infinity
LimitFSIZESoft=infinity
LimitDATA=infinity
LimitDATASoft=infinity
LimitSTACK=infinity
LimitSTACKSoft=8388608
LimitCORE=infinity
LimitCORESoft=0
LimitRSS=infinity
LimitRSSSoft=infinity
LimitNOFILE=4096
LimitNOFILESoft=1024
LimitAS=infinity
LimitASSoft=infinity
LimitNPROC=14747
LimitNPROCSoft=14747
LimitMEMLOCK=16777216
LimitMEMLOCKSoft=16777216
LimitLOCKS=infinity
LimitLOCKSSoft=infinity
LimitSIGPENDING=14747
LimitSIGPENDINGSoft=14747
LimitMSGQUEUE=819200
LimitMSGQUEUESoft=819200
LimitNICE=0
LimitNICESoft=0
LimitRTPRIO=0
LimitRTPRIOSoft=0
LimitRTTIME=infinity
LimitRTTIMESoft=infinity
OOMScoreAdjust=0
Nice=0
IOSchedulingClass=0
IOSchedulingPriority=0
CPUSchedulingPolicy=0
CPUSchedulingPriority=0
TimerSlackNSec=50000
CPUSchedulingResetOnFork=no
NonBlocking=no
StandardInput=null
StandardInputData=
StandardOutput=journal
StandardError=inherit
TTYReset=no
TTYVHangup=no
TTYVTDisallocate=no
SyslogPriority=30
SyslogLevelPrefix=yes
SyslogLevel=6
SyslogFacility=3
LogLevelMax=-1
SecureBits=0
CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend
AmbientCapabilities=
DynamicUser=no
RemoveIPC=no
MountFlags=
PrivateTmp=no
PrivateDevices=no
ProtectKernelTunables=no
ProtectKernelModules=no
ProtectControlGroups=no
PrivateNetwork=no
PrivateUsers=no
ProtectHome=no
ProtectSystem=no
SameProcessGroup=no
UtmpMode=init
IgnoreSIGPIPE=yes
NoNewPrivileges=no
SystemCallErrorNumber=0
LockPersonality=no
RuntimeDirectoryPreserve=no
RuntimeDirectoryMode=0755
StateDirectoryMode=0755
CacheDirectoryMode=0755
LogsDirectoryMode=0755
ConfigurationDirectoryMode=0755
MemoryDenyWriteExecute=no
RestrictRealtime=no
RestrictNamespaces=no
MountAPIVFS=no
KeyringMode=private
KillMode=control-group
KillSignal=15
SendSIGKILL=yes
SendSIGHUP=no
Id=kubelet.service
Names=kubelet.service
Requires=system.slice sysinit.target
WantedBy=multi-user.target
Conflicts=shutdown.target
Before=shutdown.target multi-user.target
After=sysinit.target basic.target system.slice systemd-journald.socket
Documentation=https://kubernetes.io/docs/home/
Description=kubelet: The Kubernetes Node Agent
LoadState=loaded
ActiveState=active
SubState=running
FragmentPath=/lib/systemd/system/kubelet.service
DropInPaths=/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
UnitFileState=enabled
UnitFilePreset=enabled
StateChangeTimestamp=Tue 2018-08-21 01:50:59 UTC
StateChangeTimestampMonotonic=9637845
InactiveExitTimestamp=Tue 2018-08-21 01:50:59 UTC
InactiveExitTimestampMonotonic=9637845
ActiveEnterTimestamp=Tue 2018-08-21 01:50:59 UTC
ActiveEnterTimestampMonotonic=9637845
ActiveExitTimestampMonotonic=0
InactiveEnterTimestampMonotonic=0
CanStart=yes
CanStop=yes
CanReload=no
CanIsolate=no
StopWhenUnneeded=no
RefuseManualStart=no
RefuseManualStop=no
AllowIsolate=no
DefaultDependencies=yes
OnFailureJobMode=replace
IgnoreOnIsolate=no
NeedDaemonReload=no
JobTimeoutUSec=infinity
JobRunningTimeoutUSec=infinity
JobTimeoutAction=none
ConditionResult=yes
AssertResult=yes
ConditionTimestamp=Tue 2018-08-21 01:50:59 UTC
ConditionTimestampMonotonic=9620125
AssertTimestamp=Tue 2018-08-21 01:50:59 UTC
AssertTimestampMonotonic=9620125
Transient=no
Perpetual=no
StartLimitIntervalUSec=0
StartLimitBurst=5
StartLimitAction=none
FailureAction=none
SuccessAction=none
InvocationID=1444ee4a87474c50864600d42364a2c2
CollectMode=inactive

No crio


Packages

Have dpkg
Output of "dpkg -l|egrep "(cc-oci-runtimecc-runtimerunv|kata-proxy|kata-runtime|kata-shim|kata-containers-image|linux-container| qemu-)"":

rc  qemu-system-common             1:2.11+dfsg-1ubuntu7.4            amd64        QEMU full system emulation binaries (common files)
rc  qemu-system-x86                1:2.11+dfsg-1ubuntu7.4            amd64        QEMU full system emulation binaries (x86)

No rpm

