This repository was archived by the owner on May 12, 2021. It is now read-only.
minikube installation with kata-fc fails #1915
Closed
Labels: bug (Incorrect behaviour), needs-docs (Needs new or updated documentation), needs-review (Needs to be assessed by the team), question (Requires an answer), related/firecracker (Firecracker), related/k8s (Kubernetes)
Description of problem
Following the installation guide for minikube, at least with the configuration shown below, leaves the test pod stuck in ContainerCreating.
psnively@oryx-pro:~|⇒ cd packaging/kata-deploy
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ minikube delete
! "minikube" cluster does not exist
! "minikube" profile does not exist
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ minikube start --bootstrapper=kubeadm --container-runtime=containerd --enable-default-cni --memory 16384 --network-plugin=cni --vm-driver kvm2 --feature-gates=RuntimeClass=true --cpus 4 --disk-size 50G --kubernetes-version 1.13.7
* minikube v1.2.0 on linux (amd64)
* Creating kvm2 VM (CPUs=4, Memory=16384MB, Disk=50000MB) ...
* Configuring environment for Kubernetes v1.13.7 on containerd 1.2.5
* Pulling images ...
* Launching Kubernetes ...
* Verifying: apiserver etcd scheduler controller
* Done! kubectl is now configured to use "minikube"
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl apply -f kata-rbac.yaml
serviceaccount/kata-label-node created
clusterrole.rbac.authorization.k8s.io/node-labeler created
clusterrolebinding.rbac.authorization.k8s.io/kata-label-node-rb created
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl apply -f kata-deploy.yaml
daemonset.apps/kata-deploy created
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl apply -f k8s-1.13/runtimeclass-crd.yaml
customresourcedefinition.apiextensions.k8s.io/runtimeclasses.node.k8s.io created
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl apply -f k8s-1.13/kata-fc-runtimeClass.yaml
runtimeclass.node.k8s.io/kata-fc created
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl apply -f examples/test-deploy-kata-fc.yaml
deployment.apps/php-apache-kata-fc created
service/php-apache-kata-fc created
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl get pods
NAME READY STATUS RESTARTS AGE
php-apache-kata-fc-6c6f484c4b-xlv89 0/1 ContainerCreating 0 17s
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl describe php-apache-kata-fc-6c6f484c4b-xlv89
error: the server doesn't have a resource type "php-apache-kata-fc-6c6f484c4b-xlv89"
psnively@oryx-pro:~/packaging/kata-deploy|master
⇒ kubectl describe pod/php-apache-kata-fc-6c6f484c4b-xlv89
Name: php-apache-kata-fc-6c6f484c4b-xlv89
Namespace: default
Priority: 0
Node: minikube/192.168.122.43
Start Time: Sun, 28 Jul 2019 17:57:30 -0400
Labels: pod-template-hash=6c6f484c4b
run=php-apache-kata-fc
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/php-apache-kata-fc-6c6f484c4b
Containers:
php-apache:
Container ID:
Image: k8s.gcr.io/hpa-example
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 200m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g8kbn (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-g8kbn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-g8kbn
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 44s default-scheduler Successfully assigned default/php-apache-kata-fc-6c6f484c4b-xlv89 to minikube
Warning FailedCreatePodSandBox 23s kubelet, minikube Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container: failed to create containerd task: rootfs (/run/kata-containers/shared/containers/9b9a1fbccac7b764c12fed12c4b13d220171d4011be902d6e7460ed41818ee10/rootfs) does not exist: unknown
Expected result
The test pod would run successfully.
Actual result
The pod is stuck per the above output.
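For anyone triaging: the rootfs path in the FailedCreatePodSandBox event embeds the sandbox ID. A small sed pipeline (a hypothetical helper, not part of kata-deploy) can pull it out so it can be cross-checked against the contents of /run/kata-containers/shared/containers inside the minikube VM:

```shell
# Extract the sandbox ID from the FailedCreatePodSandBox message so it can be
# compared with the directories under /run/kata-containers/shared/containers.
msg='rootfs (/run/kata-containers/shared/containers/9b9a1fbccac7b764c12fed12c4b13d220171d4011be902d6e7460ed41818ee10/rootfs) does not exist'
sandbox_id=$(printf '%s\n' "$msg" | sed -n 's|.*/containers/\([0-9a-f]*\)/rootfs.*|\1|p')
echo "$sandbox_id"
```

If that directory is missing on the node, the shared-rootfs setup (virtiofs/9p share between the host and the Firecracker guest) never completed, which is consistent with the event above.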
You may be wondering, "Why Kubernetes 1.13.7 and containerd?" The answer: that is the closest approximation to what we can get in GKE today.
I'm attaching the result of SSHing into minikube and running sudo env "PATH=/opt/kata/bin:$PATH" /opt/kata/bin/kata-collect-data.sh > kata.log, in case that helps.