PV creation fails on openstack #9893
Description
1. What kops version are you running? The command kops version will display
this information.
Version 1.18.0 (git-698bf974d8)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
v1.19.0
3. What cloud provider are you using?
openstack
4. What commands did you run? What is the simplest way to reproduce this issue?
kops create cluster --cloud openstack \
  --name my-cluster.k8s.local \
  --state ${KOPS_STATE_STORE} \
  --zones nova \
  --network-cidr 10.0.0.0/24 \
  --image Ubuntu-20.04-focal \
  --master-count=1 \
  --node-count=1 \
  --node-size 8124120 \
  --master-size 8124120 \
  --etcd-storage-type default \
  --topology private \
  --bastion \
  --networking calico \
  --api-loadbalancer-type public \
  --ssh-public-key ~/sshkey.pub \
  --os-ext-net ext-net-vlan52 \
  --os-kubelet-ignore-az=true
5. What happened after the commands executed?
The VMs get created on OpenStack, but the creation of the persistent volume fails with the following error message:
W0908 09:56:31.548562 10816 executor.go:128] error running task "Volume/a.etcd-main.my-cluster.k8s.local" (37s remaining to succeed): error creating PersistentVolume: error creating volume {20 nova map[KubernetesCluster:my-cluster.k8s.local k8s.io/etcd/main:a/a k8s.io/role/master:1] a.etcd-main.my-cluster.k8s.local default false}: Resource not found
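For context, the options struct in the error appears to carry the volume type default (coming from --etcd-storage-type default and volumeType: default in the manifest below), and "Resource not found" is the kind of error Cinder returns when no volume type with that name exists in the cloud. A minimal sketch of that suspected failure mode (illustrative only, not kops code; resolve_volume_type and the type names are hypothetical):

```python
# Hypothetical illustration of the suspected failure: creating a volume
# with a named type fails when Cinder has no type by that name.
def resolve_volume_type(requested, available):
    """Return the requested Cinder volume type name, or raise if absent."""
    if requested not in available:
        # Mirrors the "Resource not found" seen in the kops output above
        raise LookupError("Resource not found")
    return requested

# A cloud whose Cinder types are e.g. "ssd" and "hdd" has no "default":
try:
    resolve_volume_type("default", ["ssd", "hdd"])
except LookupError as e:
    print(e)  # -> Resource not found
```

If this is the cause, openstack volume type list should show which type names the cloud actually offers.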
6. What did you expect to happen?
The k8s cluster comes up
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-09-08T16:46:21Z"
  name: my-cluster.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudConfig:
    openstack:
      blockStorage:
        bs-version: v3
        ignore-volume-az: true
      loadbalancer:
        floatingNetwork: ext-net-vlan52
        floatingNetworkID: d286fe57-c256-4686-b00f-d45fe8d4eebe
        method: ROUND_ROBIN
        provider: haproxy
        useOctavia: false
      monitor:
        delay: 1m
        maxRetries: 3
        timeout: 30s
      router:
        externalNetwork: ext-net-vlan52
  cloudProvider: openstack
  configBase: swift://ois/my-cluster.k8s.local
  containerRuntime: docker
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-nova
      name: a
      volumeType: default
    memoryRequest: 100Mi
    name: main
    version: 3.2.24
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.18.8
  masterPublicName: api.my-cluster.k8s.local
  networkCIDR: 10.0.0.0/24
  networking:
    calico:
      majorVersion: v3
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 10.0.0.32/27
    name: nova
    type: Private
    zone: nova
  - cidr: 10.0.0.0/30
    name: utility-nova
    type: Utility
    zone: nova
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
error running tasks: deadline exceeded executing task Volume/a.etcd-events.my-cluster.k8s.local. Example error: error creating PersistentVolume: error creating volume {20 nova map[KubernetesCluster:my-cluster.k8s.local k8s.io/etcd/events:a/a k8s.io/role/master:1] a.etcd-events.my-cluster.k8s.local default false}: Resource not found
9. Anything else we need to know?
This is being run against an OpenStack cloud where kubespray appears to work. I am trying kops out to evaluate whether it is a better option, as it seems to offer a few more features out of the box.