This repository was archived by the owner on May 12, 2021. It is now read-only.

vc: Use BlockIndexMap instead of BlockIndex#2008

Merged
devimc merged 1 commit into kata-containers:master from darfux:use_block_index_map_instead
Mar 4, 2020

Conversation

@darfux
Contributor

@darfux darfux commented Aug 28, 2019

This allows reusing the indices of detached block devices and ensures that the index will not reach the device limit (such as maxSCSIDevices) after restarting containers many times in one pod.

I'm not sure whether it is OK to reuse the index for all the block drivers, or whether this change would break the persist feature.

Fixes: #2007
Signed-off-by: Li Yuxuan liyuxuan04@baidu.com
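The index-reuse idea behind the PR can be sketched as a small free-index allocator: track in-use indices in a set, hand out the lowest free one, and delete on detach so restarted containers reclaim slots instead of exhausting the device limit. All names below (`BlockIndexMap`, `Allocate`, `Release`, the limit of 4) are illustrative, not the PR's actual API:

```go
package main

import "fmt"

// BlockIndexMap tracks in-use block device indices as a set.
type BlockIndexMap map[int]struct{}

// Allocate returns the lowest free index below max, or an error
// when every slot (e.g. maxSCSIDevices) is already taken.
func (m BlockIndexMap) Allocate(max int) (int, error) {
	for i := 0; i < max; i++ {
		if _, used := m[i]; !used {
			m[i] = struct{}{}
			return i, nil
		}
	}
	return -1, fmt.Errorf("no free block index (limit %d)", max)
}

// Release frees an index so a later attach can reuse it.
func (m BlockIndexMap) Release(i int) {
	delete(m, i)
}

func main() {
	m := BlockIndexMap{}
	a, _ := m.Allocate(4) // 0
	b, _ := m.Allocate(4) // 1
	m.Release(a)          // detach the first device
	c, _ := m.Allocate(4) // reuses 0 instead of growing to 2
	fmt.Println(a, b, c)  // 0 1 0
}
```

A plain monotonically increasing `BlockIndex` counter never reclaims freed slots, which is exactly the leak the PR fixes.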


@jodh-intel jodh-intel left a comment


Thanks @darfux.

lgtm

/cc @WeiZhang555 for the persist changes.

@darfux darfux force-pushed the use_block_index_map_instead branch 3 times, most recently from c3a400e to f1c9f09 Compare September 4, 2019 02:45
@darfux
Contributor Author

darfux commented Sep 4, 2019

Hi @WeiZhang555 , should I do something for persistence as I've changed the HypervisorState and SandboxState?

@WeiZhang555
Member

WeiZhang555 commented Sep 9, 2019

@darfux You already did this at https://github.com/kata-containers/runtime/pull/2008/files#diff-f433efe7313cabd358c21fa927ca0c2cR33

Theoretically you can't modify or delete an existing field; you can only deprecate the old field and add a new one, and you have to take good care of compatibility.

But you're lucky that the persist API hasn't been settled yet ahead of Kata's future 2.0 version, so the change is acceptable for now.

And I'm +1 on your change from slice to map 😄
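The compatibility rule Wei describes (keep the old field, add a new one, and migrate on load) could look roughly like this in a persisted state struct. The field names and JSON tags here are hypothetical, not the actual kata persist format:

```go
package main

import "fmt"

// SandboxState sketches the persist-format rule: never delete or
// retype an existing field; deprecate it and add a new one.
type SandboxState struct {
	// BlockIndex is the old scalar counter. Deprecated: kept only so
	// state files written by older runtimes still decode.
	BlockIndex int `json:"blockIndex,omitempty"`

	// BlockIndexMap replaces BlockIndex and records each index in use.
	BlockIndexMap map[int]struct{} `json:"blockIndexMap,omitempty"`
}

// migrate upgrades state loaded from an old runtime: the counter
// means indices 0..BlockIndex-1 were handed out.
func (s *SandboxState) migrate() {
	if s.BlockIndexMap == nil {
		s.BlockIndexMap = make(map[int]struct{})
		for i := 0; i < s.BlockIndex; i++ {
			s.BlockIndexMap[i] = struct{}{}
		}
	}
}

func main() {
	old := SandboxState{BlockIndex: 2} // decoded from an old state file
	old.migrate()
	fmt.Println(len(old.BlockIndexMap)) // 2
}
```

Since the persist API was not yet frozen before 2.0, the PR could replace the field outright instead of carrying a migration like this.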

Member

@WeiZhang555 WeiZhang555 left a comment


One small comment. Otherwise LGTM

@darfux darfux force-pushed the use_block_index_map_instead branch from f1c9f09 to f67765c Compare September 10, 2019 06:10
@darfux darfux changed the title [RFC] vc: Use BlockIndexMap instead of BlockIndex vc: Use BlockIndexMap instead of BlockIndex Sep 10, 2019
@darfux darfux force-pushed the use_block_index_map_instead branch from f67765c to 78505df Compare September 10, 2019 06:27
@darfux
Contributor Author

darfux commented Sep 10, 2019

I've changed the type to map[int]struct{}, PTAL😁
/cc @WeiZhang555 @jodh-intel
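For context on the type choice: in Go, `map[int]struct{}` is the idiomatic way to model a set of integers, because the empty struct occupies zero bytes per entry, whereas `map[int]bool` spends a byte per value and admits a confusing stored `false` state. A minimal illustration (not the PR's code):

```go
package main

import "fmt"

func main() {
	// A set of in-use indices: values carry no data, only membership.
	used := map[int]struct{}{}
	used[0] = struct{}{}
	used[2] = struct{}{}

	// Membership test via the comma-ok idiom.
	_, ok := used[1]
	fmt.Println(ok) // false: index 1 is free

	// Releasing an index is a plain delete.
	delete(used, 2)
	fmt.Println(len(used)) // 1
}
```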

@GabyCT
Contributor

GabyCT commented Sep 10, 2019

/test

@codecov

codecov bot commented Sep 10, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@a1dcaac).
The diff coverage is 66.66%.

@@            Coverage Diff            @@
##             master    #2008   +/-   ##
=========================================
  Coverage          ?   50.62%           
=========================================
  Files             ?      116           
  Lines             ?    16611           
  Branches          ?        0           
=========================================
  Hits              ?     8409           
  Misses            ?     7174           
  Partials          ?     1028

@amshinde
Member

I'm not sure whether it is OK to reuse the index for all the block drivers, or whether this change would break the persist feature.
@darfux We have tests in place for virtio-block as well. You can make sure this works by adding a test that restarts containers in a pod.
cc @GabyCT

@darfux
Contributor Author

darfux commented Sep 11, 2019

Thanks @amshinde. I'm not familiar with the tests repo. Should I add a new test script under a directory such as https://github.com/kata-containers/tests/tree/master/integration/stability, or add code to an existing file that already covers this kind of test?

@amshinde
Member

Yes, you can create a new file there. We do not have any tests verifying container restarts, IIRC.
@GabyCT Can you confirm?

@jcvenegas
Member

@amshinde By "restart", do you mean https://docs.docker.com/engine/reference/commandline/restart/? If it is Docker, we already have a test; if it is k8s, I think k8s restarts the whole pod. If it is at the standalone test level, the test can be added here:
https://github.com/kata-containers/tests/tree/master/functional

@darfux
Contributor Author

darfux commented Oct 8, 2019

@jcvenegas I think docker restart doesn't cover restarting a single container inside a pod. The kubelet restarts an exited container according to the restart policy, so we can test something similar when integrating with containerd. I'm not sure it can be done at the standalone level.

@darfux darfux changed the title vc: Use BlockIndexMap instead of BlockIndex [WIP]vc: Use BlockIndexMap instead of BlockIndex Oct 11, 2019
@jodh-intel

This PR needs more 👀 on it 😄

@darfux
Contributor Author

darfux commented Nov 12, 2019

This PR needs more 👀 on it 😄

@jodh-intel I'm stuck on how to test container restart inside a pod, and have been looking through the tests repo off and on >_<

@darfux darfux force-pushed the use_block_index_map_instead branch 3 times, most recently from d1da2b7 to c9d909d Compare December 5, 2019 07:09
@WeiZhang555
Member

/test

WeiZhang555 previously approved these changes Dec 9, 2019
Member

@WeiZhang555 WeiZhang555 left a comment


LGTM once CI is happy

@WeiZhang555 WeiZhang555 dismissed their stale review December 9, 2019 02:56

waiting for test result

@darfux
Contributor Author

darfux commented Dec 9, 2019

Thanks @WeiZhang555. This PR is waiting for kata-containers/tests#2140 to be merged first. Could you please restart the failed CI cases? I'm not sure whether they are related to this PR; for example, 35 ctr /etc/resolv.conf passes under debian-10 but not ubuntu-18-04...

@WeiZhang555
Member

/test

@darfux darfux force-pushed the use_block_index_map_instead branch 2 times, most recently from 77deeed to 9cfe120 Compare January 21, 2020 11:21
@darfux
Contributor Author

darfux commented Jan 21, 2020

@jodh-intel Thanks, updated :)

@jodh-intel

@darfux - great!

Ping @kata-containers/runtime - this needs another review folks.

@devimc

devimc commented Jan 21, 2020

/test

@devimc

devimc commented Jan 21, 2020

restarting containerd CI

not ok 1 Setting sysctl
# (in test file k8s-sysctls.bats, line 22)
#   `kubectl wait --for=condition=Ready pod "$pod_name"' failed
# INFO: k8s configured to use runtimeclass
# pod/sysctl-test created
# error: timed out waiting for the condition on pods/sysctl-test
# pod "sysctl-test" deleted
Failed at 80: bats "${K8S_TEST_ENTRY}"

@devimc

devimc commented Jan 21, 2020

same error in containerd CI, @darfux please take a look

not ok 1 Setting sysctl
# (in test file k8s-sysctls.bats, line 22)
#   `kubectl wait --for=condition=Ready pod "$pod_name"' failed
# INFO: k8s configured to use runtimeclass
# pod/sysctl-test created
# error: timed out waiting for the condition on pods/sysctl-test
# pod "sysctl-test" deleted
Failed at 80: bats "${K8S_TEST_ENTRY}"

@darfux
Contributor Author

darfux commented Jan 22, 2020

Thanks @devimc. The same case passes in my environment (skipping the block-volume, k8s-scale-nginx, uts+ipc-ns, and nginx cases due to connection issues). So did I miss something when running the test, or did the high load of the CI cluster cause the case to time out in 30s? 😂

Here is the log:

Details
CRI_RUNTIME="containerd" bash -c "make kubernetes"
bash -f .ci/install_bats.sh
/usr/bin/bats
bash -f integration/kubernetes/run_kubernetes_tests.sh
+ source /etc/os-release
++ NAME=Ubuntu
++ VERSION='18.04.1 LTS (Bionic Beaver)'
++ ID=ubuntu
++ ID_LIKE=debian
++ PRETTY_NAME='Ubuntu 18.04.1 LTS'
++ VERSION_ID=18.04
++ HOME_URL=https://www.ubuntu.com/
++ SUPPORT_URL=https://help.ubuntu.com/
++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
++ VERSION_CODENAME=bionic
++ UBUNTU_CODENAME=bionic
+++ readlink -f integration/kubernetes/run_kubernetes_tests.sh
++ dirname /root/go/src/github.com/kata-containers/tests/integration/kubernetes/run_kubernetes_tests.sh
+ kubernetes_dir=/root/go/src/github.com/kata-containers/tests/integration/kubernetes
+ cidir=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/
+ source /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci//lib.sh
++ export KATA_RUNTIME=kata-runtime
++ KATA_RUNTIME=kata-runtime
++ export KATA_KSM_THROTTLER=no
++ KATA_KSM_THROTTLER=no
++ export KATA_NEMU_DESTDIR=/usr
++ KATA_NEMU_DESTDIR=/usr
++ export KATA_QEMU_DESTDIR=/usr
++ KATA_QEMU_DESTDIR=/usr
++ export KATA_ETC_CONFIG_PATH=/etc/kata-containers/configuration.toml
++ KATA_ETC_CONFIG_PATH=/etc/kata-containers/configuration.toml
++ KATA_KSM_THROTTLER_JOB=kata-ksm-throttler
++ export KATA_DOCKER_TIMEOUT=30
++ KATA_DOCKER_TIMEOUT=30
++ export KATA_NET_TIMEOUT=30
++ KATA_NET_TIMEOUT=30
++ command -v go
++ export GOPATH=/root/go
++ GOPATH=/root/go
++ tests_repo=github.com/kata-containers/tests
++ lib_script=/root/go/src/github.com/kata-containers/tests/lib/common.bash
++ source /root/go/src/github.com/kata-containers/tests/lib/common.bash
+++ VC_POD_DIR=/run/vc/sbs
+++ RUN_SBS_DIR=/run/vc/sbs
+++ KATA_TESTS_BASEDIR=/var/log/kata-tests
+++ KATA_TESTS_LOGDIR=/var/log/kata-tests/logs
+++ KATA_TESTS_DATADIR=/var/log/kata-tests/data
+++ KATA_TESTS_CACHEDIR=/var/log/kata-tests/cache
+++ KATA_HYPERVISOR=qemu
+++ trap 'handle_error $LINENO' ERR
+++ arch
++ export KATA_OBS_REPO_BASE=http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x86_64:/master
++ KATA_OBS_REPO_BASE=http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x86_64:/master
++ jenkins_url=http://jenkins.katacontainers.io
++ cached_artifacts_path=lastSuccessfulBuild/artifact/artifacts
++ uname -m
+ arch=x86_64
+ KATA_HYPERVISOR=qemu
+ '[' ubuntu '!=' ubuntu ']'
+ '[' qemu == firecracker ']'
+ trap '${kubernetes_dir}/cleanup_env.sh' EXIT
+ systemctl is-active --quiet docker
+ K8S_TEST_UNION=("k8s-attach-handlers.bats" "k8s-configmap.bats" "k8s-copy-file.bats" "k8s-cpu-ns.bats" "k8s-credentials-secrets.bats" "k8s-custom-dns.bats" "k8s-empty-dirs.bats" "k8s-env.bats" "k8s-expose-ip.bats" "k8s-job.bats" "k8s-limit-range.bats" "k8s-liveness-probes.bats" "k8s-memory.bats" "k8s-parallel.bats" "k8s-pid-ns.bats" "k8s-pod-quota.bats" "k8s-port-forward.bats" "k8s-projected-volume.bats" "k8s-qos-pods.bats" "k8s-replication.bats" "k8s-security-context.bats" "k8s-shared-volume.bats" "k8s-sysctls.bats" "k8s-number-cpus.bats" "k8s-volume.bats" "k8s-hugepages.bats")
+ '[' -f /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci//x86_64/configuration_x86_64.yaml ']'
+ pushd /root/go/src/github.com/kata-containers/tests/integration/kubernetes
~/go/src/github.com/kata-containers/tests/integration/kubernetes ~/go/src/github.com/kata-containers/tests
+ ./init.sh
+ set -o errexit
+ set -o nounset
+ set -o pipefail
+++ readlink -f ./init.sh
++ dirname /root/go/src/github.com/kata-containers/tests/integration/kubernetes/init.sh
+ SCRIPT_PATH=/root/go/src/github.com/kata-containers/tests/integration/kubernetes
+ source /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/lib.sh
++ export KATA_RUNTIME=kata-runtime
++ KATA_RUNTIME=kata-runtime
++ export KATA_KSM_THROTTLER=no
++ KATA_KSM_THROTTLER=no
++ export KATA_NEMU_DESTDIR=/usr
++ KATA_NEMU_DESTDIR=/usr
++ export KATA_QEMU_DESTDIR=/usr
++ KATA_QEMU_DESTDIR=/usr
++ export KATA_ETC_CONFIG_PATH=/etc/kata-containers/configuration.toml
++ KATA_ETC_CONFIG_PATH=/etc/kata-containers/configuration.toml
++ KATA_KSM_THROTTLER_JOB=kata-ksm-throttler
++ export KATA_DOCKER_TIMEOUT=30
++ KATA_DOCKER_TIMEOUT=30
++ export KATA_NET_TIMEOUT=30
++ KATA_NET_TIMEOUT=30
++ command -v go
++ export GOPATH=/root/go
++ GOPATH=/root/go
++ tests_repo=github.com/kata-containers/tests
++ lib_script=/root/go/src/github.com/kata-containers/tests/lib/common.bash
++ source /root/go/src/github.com/kata-containers/tests/lib/common.bash
+++ VC_POD_DIR=/run/vc/sbs
+++ RUN_SBS_DIR=/run/vc/sbs
+++ KATA_TESTS_BASEDIR=/var/log/kata-tests
+++ KATA_TESTS_LOGDIR=/var/log/kata-tests/logs
+++ KATA_TESTS_DATADIR=/var/log/kata-tests/data
+++ KATA_TESTS_CACHEDIR=/var/log/kata-tests/cache
+++ KATA_HYPERVISOR=qemu
+++ trap 'handle_error $LINENO' ERR
+++ arch
++ export KATA_OBS_REPO_BASE=http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x86_64:/master
++ KATA_OBS_REPO_BASE=http://download.opensuse.org/repositories/home:/katacontainers:/releases:/x86_64:/master
++ jenkins_url=http://jenkins.katacontainers.io
++ cached_artifacts_path=lastSuccessfulBuild/artifact/artifacts
+ source /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../lib/common.bash
++ VC_POD_DIR=/run/vc/sbs
++ RUN_SBS_DIR=/run/vc/sbs
++ KATA_TESTS_BASEDIR=/var/log/kata-tests/logs
++ KATA_TESTS_LOGDIR=/var/log/kata-tests/logs
++ KATA_TESTS_DATADIR=/var/log/kata-tests/data
++ KATA_TESTS_CACHEDIR=/var/log/kata-tests/cache
++ KATA_HYPERVISOR=qemu
++ trap 'handle_error $LINENO' ERR
+ source /etc/os-release
++ NAME=Ubuntu
++ VERSION='18.04.1 LTS (Bionic Beaver)'
++ ID=ubuntu
++ ID_LIKE=debian
++ PRETTY_NAME='Ubuntu 18.04.1 LTS'
++ VERSION_ID=18.04
++ HOME_URL=https://www.ubuntu.com/
++ SUPPORT_URL=https://help.ubuntu.com/
++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
++ VERSION_CODENAME=bionic
++ UBUNTU_CODENAME=bionic
+ RUNTIME=kata-runtime
++ command -v kata-runtime
+ RUNTIME_PATH=/usr/local/bin/kata-runtime
+ system_pod_wait_time=120
+ sleep_time=5
+ cri_runtime=containerd
++ get_version externals.kubernetes.version
++ dependency=externals.kubernetes.version
++ runtime_repo=github.com/kata-containers/runtime
++ runtime_repo_dir=/root/go/src/github.com/kata-containers/runtime
++ versions_file=/root/go/src/github.com/kata-containers/runtime/versions.yaml
+++ dirname /root/go/src/github.com/kata-containers/runtime
++ mkdir -p /root/go/src/github.com/kata-containers
++ '[' -d /root/go/src/github.com/kata-containers/runtime ']'
++ get_dep_from_yaml_db /root/go/src/github.com/kata-containers/runtime/versions.yaml externals.kubernetes.version
++ local versions_file=/root/go/src/github.com/kata-containers/runtime/versions.yaml
++ local dependency=externals.kubernetes.version
++ '[' '!' -f /root/go/src/github.com/kata-containers/runtime/versions.yaml ']'
++ /root/go/src/github.com/kata-containers/tests/.ci/install_yq.sh
+++ /root/go/bin/yq read /root/go/src/github.com/kata-containers/runtime/versions.yaml externals.kubernetes.version
++ result=1.16.2-00
++ '[' 1.16.2-00 = null ']'
++ echo 1.16.2-00
+ kubernetes_version=1.16.2-00
+ BAREMETAL=false
+ iptables_cache=/var/log/kata-tests/data/iptables_cache
+ '[' false == true ']'
+ '[' ubuntu == fedora ']'
+ case "${cri_runtime}" in
+ cri_runtime_socket=/run/containerd/containerd.sock
+ check_processes
+ extract_kata_env
+ local toml
++ get_docker_kata_path kata-runtime
+++ sudo docker info --format '{{json .Runtimes}}'
++ local 'jpaths={"kata-runtime":{"path":"/usr/local/bin/kata-runtime"},"runc":{"path":"docker-runc"}}'
+++ jq '."kata-runtime".path'
++ local 'rpath="/usr/local/bin/kata-runtime"'
++ rpath='"/usr/local/bin/kata-runtime'
++ rpath=/usr/local/bin/kata-runtime
++ echo /usr/local/bin/kata-runtime
+ local rpath=/usr/local/bin/kata-runtime
+ '[' -n /usr/local/bin/kata-runtime ']'
++ command -v /usr/local/bin/kata-runtime
+ rpath=/usr/local/bin/kata-runtime
+ '[' -x /usr/local/bin/kata-runtime ']'
++ /usr/local/bin/kata-runtime kata-env
++ echo 0
+ local is_valid=0
+ '[' 0 == 0 ']'
++ /usr/local/bin/kata-runtime kata-env
+ local 'toml=[Meta]
  Version = "1.0.23"

[Runtime]
  Debug = true
  Trace = false
  DisableGuestSeccomp = true
  DisableNewNetNs = false
  SandboxCgroupOnly = false
  Path = "/usr/local/bin/kata-runtime"
  [Runtime.Version]
    Semver = "1.10.0-rc0"
    Commit = "9cfe120361e20e149f30263b146ff4ecc08c43d3"
    OCI = "1.0.1-dev"
  [Runtime.Config]
    Path = "/usr/share/defaults/kata-containers/configuration.toml"

[Hypervisor]
  MachineType = "pc"
  Version = "QEMU emulator version 4.1.1 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers"
  Path = "/usr/bin/qemu-system-x86_64"
  BlockDeviceDriver = "virtio-scsi"
  EntropySource = "/dev/urandom"
  Msize9p = 8192
  MemorySlots = 10
  Debug = true
  UseVSock = false
  SharedFS = "virtio-9p"

[Image]
  Path = "/usr/share/kata-containers/kata-containers-clearlinux-32150-osbuilder-7526f49-agent-7c2d8ab.img"

[Kernel]
  Path = "/usr/share/kata-containers/vmlinuz-4.19.86-62"
  Parameters = "systemd.unit=kata-containers.target systemd.mask=systemd-networkd.service systemd.mask=systemd-networkd.socket agent.log=debug agent.log=debug"

[Initrd]
  Path = ""

[Proxy]
  Type = "kataProxy"
  Version = "kata-proxy version 1.10.0-rc0-a101c3da6f36167fbaa6daa72f8caf8b8c5230f0"
  Path = "/usr/libexec/kata-containers/kata-proxy"
  Debug = true

[Shim]
  Type = "kataShim"
  Version = "kata-shim version 1.10.0-rc0-7e0e427efe81cc2ce7e8298178a72183a9811698"
  Path = "/usr/libexec/kata-containers/kata-shim"
  Debug = true

[Agent]
  Type = "kata"
  Debug = true
  Trace = false
  TraceMode = ""
  TraceType = ""

[Host]
  Kernel = "4.15.0-29-generic"
  Architecture = "amd64"
  VMContainerCapable = true
  SupportVSocks = true
  [Host.Distro]
    Name = "Ubuntu"
    Version = "18.04"
  [Host.CPU]
    Vendor = "GenuineIntel"
    Model = "Intel(R) Xeon(R) Gold 5117 CPU @ 2.00GHz"

[Netmon]
  Version = "kata-netmon version 1.10.0-rc0"
  Path = "/usr/libexec/kata-containers/kata-netmon"
  Debug = true
  Enable = false'
++ awk '/^  \[Runtime.Config\]$/ {foundit=1} /^    Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ RUNTIME_CONFIG_PATH=/usr/share/defaults/kata-containers/configuration.toml
++ awk '/^  \[Runtime.Version\]$/ {foundit=1} /^    Semver =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ RUNTIME_VERSION=1.10.0-rc0
++ awk '/^  \[Runtime.Version\]$/ {foundit=1} /^    Commit =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ RUNTIME_COMMIT=9cfe120361e20e149f30263b146ff4ecc08c43d3
++ awk '/^\[Runtime\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ RUNTIME_PATH=/usr/local/bin/kata-runtime
++ awk '/^\[Shim\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ SHIM_PATH=/usr/libexec/kata-containers/kata-shim
++ awk '/^\[Shim\]$/ {foundit=1} /^  Version =/ { if (foundit==1) {$1=$2=""; print $0; foundit=0} } '
++ sed 's/"//g'
+ SHIM_VERSION='  kata-shim version 1.10.0-rc0-7e0e427efe81cc2ce7e8298178a72183a9811698'
++ awk '/^\[Proxy\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ PROXY_PATH=/usr/libexec/kata-containers/kata-proxy
++ awk '/^\[Proxy\]$/ {foundit=1} /^  Version =/ { if (foundit==1) {print $5; foundit=0} } '
++ sed 's/"//g'
+ PROXY_VERSION=1.10.0-rc0-a101c3da6f36167fbaa6daa72f8caf8b8c5230f0
++ awk '/^\[Hypervisor\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ HYPERVISOR_PATH=/usr/bin/qemu-system-x86_64
++ awk '/^\[Hypervisor\]$/ {foundit=1} /^  Version =/ { if (foundit==1) {$1=$2=""; print $0; foundit=0} } '
++ sed 's/"//g'
+ HYPERVISOR_VERSION='  QEMU emulator version 4.1.1 (kata-static)\nCopyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers'
++ awk '/^\[Initrd\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ INITRD_PATH=
++ awk '/^\[Netmon\]$/ {foundit=1} /^  Path =/ { if (foundit==1) {print $3; foundit=0} } '
++ sed 's/"//g'
+ NETMON_PATH=/usr/libexec/kata-containers/kata-netmon
+ return 0
+ '[' -x /usr/local/bin/kata-runtime ']'
++ /usr/local/bin/kata-runtime kata-env
++ awk '/UseVSock/ {print $3}'
+ local vsock_configured=false
++ /usr/local/bin/kata-runtime kata-env
++ awk '/SupportVSock/ {print $3}'
+ local vsock_supported=true
+ '[' false == true ']'
+ general_processes=(${PROXY_PATH} ${HYPERVISOR_PATH} ${SHIM_PATH})
+ for i in "${general_processes[@]}"
+ pgrep -f /usr/libexec/kata-containers/kata-proxy
+ for i in "${general_processes[@]}"
+ pgrep -f /usr/bin/qemu-system-x86_64
+ for i in "${general_processes[@]}"
+ pgrep -f /usr/libexec/kata-containers/kata-shim
+ cni_config_dir=/etc/cni/net.d
+ cni_interface=cni0
+ sudo rm -rf '/var/lib/cni/networks/*'
+ sudo rm -rf '/etc/cni/net.d/*'
+ ip a show cni0
Device "cni0" does not exist.
+ echo 'Start containerd service'
Start containerd service
+ sudo systemctl start containerd
+ max_cri_socket_check=5
+ wait_time_cri_socket_check=5
++ seq 5
+ for i in $(seq ${max_cri_socket_check})
+ sleep 5
+ '[' -e /run/containerd/containerd.sock ']'
+ break
+ sudo systemctl status containerd --no-pager
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; disabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/containerd.service.d
           └─http-proxy.conf
   Active: active (running) since Wed 2020-01-22 23:02:00 CST; 5s ago
     Docs: https://containerd.io
  Process: 3271 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 3272 (containerd)
    Tasks: 20 (limit: 7372)
   CGroup: /system.slice/containerd.service
           └─3272 /usr/local/bin/containerd

Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.315247940+08:00" level=error msg="Failed to load cni during init, please…ni config"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.315553469+08:00" level=info msg="loading plugin \"io.containerd.grpc.v1.…rd.grpc.v1
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.315898171+08:00" level=info msg=serving... address=/run/containerd/conta…sock.ttrpc
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.316020807+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.316034616+08:00" level=info msg="containerd successfully booted in 0.042782s"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.316043094+08:00" level=info msg="Start subscribing containerd event"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.316229404+08:00" level=info msg="Start recovering state"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.357593180+08:00" level=info msg="Start event monitor"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.357661975+08:00" level=info msg="Start snapshots syncer"
Jan 22 23:02:00 ubuntu containerd[3272]: time="2020-01-22T23:02:00.357673511+08:00" level=info msg="Start streaming server"
Hint: Some lines were ellipsized, use -l to show in full.
+ echo 'Init cluster using /run/containerd/containerd.sock'
Init cluster using /run/containerd/containerd.sock
+ kubeadm_config_template=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/kubeadm/config.yaml
++ mktemp --tmpdir kubeadm_config.XXXXXX.yaml
+ kubeadm_config_file=/tmp/kubeadm_config.vszHcI.yaml
+ sed -e 's|CRI_RUNTIME_SOCKET|/run/containerd/containerd.sock|' /root/go/src/github.com/kata-containers/tests/integration/kubernetes/kubeadm/config.yaml
+ sed -i 's|KUBERNETES_VERSION|v1.16.2|' /tmp/kubeadm_config.vszHcI.yaml
+ '[' false == true ']'
+ sudo -E kubeadm init --config /tmp/kubeadm_config.vszHcI.yaml
[init] Using Kubernetes version: v1.16.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ubuntu kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.122.195]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.122.195 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ubuntu localhost] and IPs [192.168.122.195 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.505108 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ubuntu as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ubuntu as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7kqojr.tuawhogs4g2fpjnf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.122.195:6443 --token 7kqojr.tuawhogs4g2fpjnf \
    --discovery-token-ca-cert-hash sha256:b97cc136f29fc990d2533fa8a896007e9afca54b7deb329298508f62c9d2fd25 
+ mkdir -p /root/.kube
+ sudo cp /etc/kubernetes/admin.conf /root/.kube/config
++ id -u
++ id -g
+ sudo chown 0:0 /root/.kube/config
+ export KUBECONFIG=/root/.kube/config
+ KUBECONFIG=/root/.kube/config
+ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
ubuntu   NotReady   master   3s    v1.16.2
+ kubectl get pods
No resources found in default namespace.
++ get_test_version externals.flannel.version
++ local dependency=externals.flannel.version
++ local db
++ local cidir
+++ dirname /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/lib.sh
++ local cidir=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci
++ db=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/../versions.yaml
++ get_dep_from_yaml_db /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/../versions.yaml externals.flannel.version
++ local versions_file=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/../versions.yaml
++ local dependency=externals.flannel.version
++ '[' '!' -f /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/../versions.yaml ']'
++ /root/go/src/github.com/kata-containers/tests/.ci/install_yq.sh
+++ /root/go/bin/yq read /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/../versions.yaml externals.flannel.version
++ result=862c448ef28fd890e2ac4e5fddc49e7fe9693b31
++ '[' 862c448ef28fd890e2ac4e5fddc49e7fe9693b31 = null ']'
++ echo 862c448ef28fd890e2ac4e5fddc49e7fe9693b31
+ flannel_version=862c448ef28fd890e2ac4e5fddc49e7fe9693b31
+ flannel_url=https://raw.githubusercontent.com/coreos/flannel/862c448ef28fd890e2ac4e5fddc49e7fe9693b31/Documentation/kube-flannel.yml
++ /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/kata-arch.sh
+ arch=x86_64
+ '[' -f /root/go/src/github.com/kata-containers/tests/integration/kubernetes/../../.ci/x86_64/kubernetes/init.sh ']'
+ network_plugin_config=https://raw.githubusercontent.com/coreos/flannel/862c448ef28fd890e2ac4e5fddc49e7fe9693b31/Documentation/kube-flannel.yml
+ HTTP_PROXY=http://10.227.53.30:28123/
+ HTTPS_PROXY=http://10.227.53.30:28123/
+ NO_PROXY=10.0.0.0/8,192.168.0.0/16,127.0.0.1,localhost
+ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/862c448ef28fd890e2ac4e5fddc49e7fe9693b31/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
+ wait_pods_ready
+ local 'pods_status=kubectl get pods --all-namespaces'
+ local 'apiserver_pod=kube-apiserver.*1/1.*Running'
+ local 'controller_pod=kube-controller-manager.*1/1.*Running'
+ local 'etcd_pod=etcd.*1/1.*Running'
+ local 'scheduler_pod=kube-scheduler.*1/1.*Running'
+ system_pod=($apiserver_pod $controller_pod $etcd_pod $scheduler_pod)
+ local system_pod
+ for pod_entry in "${system_pod[@]}"
+ waitForProcess 120 5 'kubectl get pods --all-namespaces | grep kube-apiserver.*1/1.*Running'
+ wait_time=120
+ sleep_time=5
+ cmd='kubectl get pods --all-namespaces | grep kube-apiserver.*1/1.*Running'
+ '[' 120 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep kube-apiserver.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'kube-apiserver.*1/1.*Running'
No resources found
+ sleep 5
+ wait_time=115
+ '[' 115 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep kube-apiserver.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'kube-apiserver.*1/1.*Running'
No resources found
+ sleep 5
+ wait_time=110
[... 12 near-identical retry iterations elided (wait_time 110 → 55, stepping by 5) ...]
+ '[' 50 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep kube-apiserver.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'kube-apiserver.*1/1.*Running'
kube-system   kube-apiserver-ubuntu            1/1     Running   0          5s
+ return 0
+ for pod_entry in "${system_pod[@]}"
+ waitForProcess 120 5 'kubectl get pods --all-namespaces | grep kube-controller-manager.*1/1.*Running'
+ wait_time=120
+ sleep_time=5
+ cmd='kubectl get pods --all-namespaces | grep kube-controller-manager.*1/1.*Running'
+ '[' 120 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep kube-controller-manager.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'kube-controller-manager.*1/1.*Running'
kube-system   kube-controller-manager-ubuntu   1/1     Running   0          14s
+ return 0
+ for pod_entry in "${system_pod[@]}"
+ waitForProcess 120 5 'kubectl get pods --all-namespaces | grep etcd.*1/1.*Running'
+ wait_time=120
+ sleep_time=5
+ cmd='kubectl get pods --all-namespaces | grep etcd.*1/1.*Running'
+ '[' 120 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep etcd.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'etcd.*1/1.*Running'
+ sleep 5
+ wait_time=115
+ '[' 115 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep etcd.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'etcd.*1/1.*Running'
+ sleep 5
+ wait_time=110
+ '[' 110 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep etcd.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'etcd.*1/1.*Running'
kube-system   etcd-ubuntu                      1/1     Running   0          9s
+ return 0
+ for pod_entry in "${system_pod[@]}"
+ waitForProcess 120 5 'kubectl get pods --all-namespaces | grep kube-scheduler.*1/1.*Running'
+ wait_time=120
+ sleep_time=5
+ cmd='kubectl get pods --all-namespaces | grep kube-scheduler.*1/1.*Running'
+ '[' 120 -gt 0 ']'
+ eval 'kubectl get pods --all-namespaces | grep kube-scheduler.*1/1.*Running'
++ kubectl get pods --all-namespaces
++ grep 'kube-scheduler.*1/1.*Running'
kube-system   kube-scheduler-ubuntu            1/1     Running   0          4s
+ return 0
+ runtimeclass_files_path=/root/go/src/github.com/kata-containers/tests/integration/kubernetes/runtimeclass_workloads
+ echo 'Create kata RuntimeClass resource'
Create kata RuntimeClass resource
+ kubectl create -f /root/go/src/github.com/kata-containers/tests/integration/kubernetes/runtimeclass_workloads/kata-runtimeclass.yaml
runtimeclass.node.k8s.io/kata created
++ hostname
+ kubectl taint nodes ubuntu node-role.kubernetes.io/master:NoSchedule-
node/ubuntu untainted
+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-attach-handlers.bats
 ✓ Running with postStart and preStop handlers

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-configmap.bats
 ✓ ConfigMap for a pod

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-copy-file.bats
 ✓ Copy file in a pod
 ✓ Copy from pod to host

2 tests, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-cpu-ns.bats
 ✓ Check CPU constraints

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-credentials-secrets.bats
 ✓ Credentials using secrets

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-custom-dns.bats
 ✓ Check custom dns

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-empty-dirs.bats
 ✓ Empty dir volumes

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-env.bats
 ✓ Environment variables

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-expose-ip.bats
 ✓ Expose IP Address

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-job.bats
   Run a job to completion                                                                                                                        1/1
1 test, 0 failures, 1 not run

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-limit-range.bats
 ✓ Limit range for storage

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-liveness-probes.bats
 ✓ Liveness probe
 ✓ Liveness http probe
 ✓ Liveness tcp probe

3 tests, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-memory.bats
 - Exceeding memory constraints (skipped: test not working see: https://github.com/kata-containers/runtime/issues/1127, https://github.com/kata-containers/runtime/issues/1249)
 - Running within memory constraints (skipped: test not working see: https://github.com/kata-containers/runtime/issues/1127, https://github.com/kata-containers/runtime/issues/1249)

2 tests, 0 failures, 2 skipped

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-parallel.bats
 ✓ Parallel jobs

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-pid-ns.bats
 ✓ Check PID namespaces

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-pod-quota.bats
 ✓ Pod quota

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-port-forward.bats
   Port forwarding                                                                                                                                1/1
1 test, 0 failures, 1 not run

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-projected-volume.bats
 ✓ Projected volume

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-qos-pods.bats
   BestEffort QoS                                                                                                                                 3/3
3 tests, 0 failures, 3 not run

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-replication.bats
   Replication controller                                                                                                                         1/1
1 test, 0 failures, 1 not run

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-security-context.bats
 ✓ Security context

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-shared-volume.bats
 ✓ Containers with shared volume
 ✓ initContainer with shared volume

2 tests, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-sysctls.bats
 ✓ Setting sysctl

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-number-cpus.bats
 ✓ Check number of cpus

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-volume.bats
 ✓ Create Persistent Volume

1 test, 0 failures

+ for K8S_TEST_ENTRY in ${K8S_TEST_UNION[@]}
+ bats k8s-hugepages.bats
   Hugepages and sandbox cgroup                                                                                                                   2/2
2 tests, 0 failures, 2 not run

+ popd
~/go/src/github.com/kata-containers/tests
+ /root/go/src/github.com/kata-containers/tests/integration/kubernetes/cleanup_env.sh
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[reset] Removing info for node "ubuntu" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0122 23:12:00.039727   31772 removeetcdmember.go:61] [reset] failed to remove etcd member: error syncing endpoints with etc: etcdclient: no available endpoints
Please manually remove this etcd member using etcdctl
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
INFO: Runtime environment not found - setting defaults

@devimc commented Jan 22, 2020

@darfux ok, let me restart it again

@darfux (Contributor, Author) commented Jan 23, 2020

Still failing on the sysctl case; I'll try to reproduce it first.

@darfux (Contributor, Author) commented Jan 30, 2020

Hi @devimc, I have found two issues that are probably related to this test failure.

  1. The agent should remove dev from pciDeviceMap when device is unplugged. I have raised a PR agent: Remove dev from pciDeviceMap when device is unplugged agent#730
  2. The runtime (maybe) wrongly hotplugs the host's loop devices as block devices when starting a privileged container. This makes the agent handle a large number of devices, creating a race condition that usually triggers the first issue. I have raised an issue Loop devices are wrongly hotplugged when using privileged containers  #2426 , please feel free to close it if such behavior works as expected :)

These issues cause the second container of the sysctl case to fail to start on the first attempt. In my previous tests, the environment was fast enough to recover from CrashLoopBackOff by restarting the second container before the timeout, so the test case passed in my environment. In the lower-performance CI environment it cannot recover within 30s, so I guess that is why the case always fails there.

@devimc commented Jan 30, 2020

@darfux thanks for debugging this, let's merge kata-containers/agent#730 and try again

@devimc commented Feb 7, 2020

@darfux jfyi jenkins-ci-cri-containerd runs with sandbox_cgroup_only=true in the configuration file

@grahamwhaley (Contributor)

I've unblocked kata-containers/agent#730 - let's keep an eye on that to check when we can progress this....

@darfux (Contributor, Author) commented Mar 2, 2020

> I've unblocked kata-containers/agent#730 - let's keep an eye on that to check when we can progress this....

Thanks @devimc for merging the PR. I think we can retry the cri-containerd case now 👻

@devimc commented Mar 2, 2020

/test

This allows a detached block index to be reused, and ensures that the
index will not reach the device limit (such as `maxSCSIDevices`)
after restarting containers many times in one pod.

Fixes: kata-containers#2007
Signed-off-by: Li Yuxuan <liyuxuan04@baidu.com>
@darfux darfux force-pushed the use_block_index_map_instead branch from 9cfe120 to e9a4658 Compare March 3, 2020 02:33
@darfux (Contributor, Author) commented Mar 3, 2020

Persist tests failed to compile due to the GetDriver() change, so I've rebased.

@devimc commented Mar 3, 2020

/test

@darfux (Contributor, Author) commented Mar 3, 2020

Thanks @devimc. The containerd CI finally passes now 🎉 The remaining failed cases hit curl or rootfs-creation errors; could you please restart them again? 😉

@devimc commented Mar 3, 2020

@darfux jobs restarted

@darfux (Contributor, Author) commented Mar 3, 2020

Both the podman and initrd cases now fail during the QEMU build with:

 ERROR: User requested feature libpmem
        configure was not able to find it.
        Install nvml or pmdk

/cc @devimc. Is it a test configuration issue?

@devimc commented Mar 3, 2020

@darfux my bad, fixing right away

@devimc commented Mar 4, 2020

restarting podman and initrd CIs

@devimc devimc left a comment

thanks @darfux - I left some questions

currentIndex := -1
for i := 0; i < maxBlockIndex; i++ {
	if _, ok := s.state.BlockIndexMap[i]; !ok {
		currentIndex = i
devimc:

So, if BlockIndexMap is empty, currentIndex will be set to 0, but in the next lines you compare currentIndex == -1? Is that correct?

darfux (Contributor, Author):

Thanks @devimc for the questions. This loop finds the first unused index in the index map. If BlockIndexMap is empty, currentIndex becomes the first slot of the bitmap with value 0, i.e. 0. If map[0] through map[maxBlockIndex-1] are all marked as used, there is no index left to reuse, so currentIndex is never assigned and stays -1; we then return an error for that condition.

devimc:

Thanks @darfux, that makes sense.

-currentIndex := s.state.BlockIndex
+var err error
+currentIndex := -1
+for i := 0; i < maxBlockIndex; i++ {
devimc:

how about changing this to:

bmLen := len(s.state.BlockIndexMap)
if bmLen > 0 {
	currentIndex = bmLen + 1
}
if currentIndex == -1 {
	return -1, errors.New("no available block index")
}

@devimc devimc merged commit 83eef43 into kata-containers:master Mar 4, 2020
@darfux darfux deleted the use_block_index_map_instead branch March 6, 2020 01:57