Conversation
Force-pushed: 828ed67 to 7dba0c6
Force-pushed: d2bf2d5 to ef20876
Force-pushed: ef20876 to 57f3c85
darkowlzz left a comment:

Looks great. Left a few comments.
pkg/storageos/deploy.go (outdated)
```go
configmapName         = "storageos-node-config"
csiHelperName         = "storageos-csi-helper"
apiManagerName        = "storageos-api-manager"
apiManagerMetricsName = "storageos-api-manager-metrics"
```
It would be useful to move these names above, along with the other exported variables, and use them in the tests, avoiding raw string names for these components.
Moved to api-manager.go and exported; tests updated.
test/e2e/util/cluster.go (outdated)
```go
func APIManagerDeploymentTest(t *testing.T, ns string, retryInterval, timeout time.Duration) {
	f := framework.Global
	var deployment *appsv1.Deployment
	err := wait.Poll(retryInterval, timeout, func() (done bool, err error) {
```
There's e2eutil.WaitForDeployment() in the operator-sdk test framework for waiting for a Deployment to become ready with a given number of replicas. Can that be used here instead?
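A minimal sketch of what the suggestion could look like, replacing the hand-rolled wait.Poll loop. This assumes the operator-sdk test framework's e2eutil.WaitForDeployment signature and the f.KubeClient field on framework.Global; the replica count of 1 is also an assumption, not something stated in the diff.

```go
package util

import (
	"testing"
	"time"

	framework "github.com/operator-framework/operator-sdk/pkg/test"
	"github.com/operator-framework/operator-sdk/pkg/test/e2eutil"
)

// APIManagerDeploymentTest waits for the api-manager Deployment to report the
// expected number of ready replicas, delegating the polling loop to the
// operator-sdk helper instead of calling wait.Poll directly.
func APIManagerDeploymentTest(t *testing.T, ns string, retryInterval, timeout time.Duration) {
	f := framework.Global
	// WaitForDeployment polls the Deployment until AvailableReplicas matches
	// the requested count, or the timeout elapses.
	if err := e2eutil.WaitForDeployment(t, f.KubeClient, ns, "storageos-api-manager", 1, retryInterval, timeout); err != nil {
		t.Fatalf("api-manager deployment not ready: %v", err)
	}
}
```

This drops the manual deployment variable and poll callback entirely; the helper also logs progress on each retry, which the hand-rolled loop did not.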
test/e2e/util/cluster.go (outdated)
```go
if err != nil {
	return false, err
}
// TODO: wait for available? It's timing out waiting atm.
```
I think this is failing because there's no storageos/api-manager:v0.1.0 published yet. Maybe we should use a develop image in these tests by setting it in the ClusterConfiguration of the test cluster.
test/e2e/util/cluster.go (outdated)
```go
// check label needed for metrics service.
for _, pod := range pods.Items {
	label := "app.kubernetes.io/component"
```
This can be replaced with AppComponent from pkg/util/k8s.
```go
// getServiceMonitor returns a generic ServiceMonitor object.
func getServiceMonitor(name, namespace string, labels map[string]string, annotations map[string]string, svc *corev1.Service) *monitoringv1.ServiceMonitor {
	boolTrue := true
	ownerRef := metav1.OwnerReference{
```
Maybe we should have a comment here explaining that this helps with cleanup: we want the ServiceMonitor to be deleted when the associated Service is deleted.
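A sketch of the commented owner reference the reviewer is asking for. The helper name serviceOwnerRef is hypothetical (the original builds the reference inline inside getServiceMonitor); the field values follow the diff context above.

```go
package storageos

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// serviceOwnerRef builds an OwnerReference pointing at the given Service.
// Making the Service the owner of the ServiceMonitor means Kubernetes garbage
// collection deletes the ServiceMonitor automatically when the Service is
// deleted, so the operator needs no explicit cleanup logic for it.
func serviceOwnerRef(svc *corev1.Service) metav1.OwnerReference {
	boolTrue := true
	return metav1.OwnerReference{
		APIVersion:         "v1",
		Kind:               "Service",
		Name:               svc.Name,
		UID:                svc.UID,
		Controller:         &boolTrue,
		BlockOwnerDeletion: &boolTrue,
	}
}
```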
```go
client := fake.NewFakeClient()
s := scheme.Scheme
for _, f := range []func(*runtime.Scheme) error{
	monitoringv1.AddToScheme,
```
We should be able to make the above CSIDriver test case pass by importing
storagev1beta1 "k8s.io/api/storage/v1beta1"
and adding storagev1beta1.AddToScheme here. I don't remember why it wasn't done before 😄 Thanks.
Added & confirmed it fixed the CSI test.
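For context, the registration the reviewer describes could look like the sketch below. It assumes the test builds on client-go's global scheme as in the diff above; buildTestScheme is a hypothetical helper name, and the real test also registers other types such as monitoringv1.AddToScheme.

```go
package storageos

import (
	storagev1beta1 "k8s.io/api/storage/v1beta1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/scheme"
)

// buildTestScheme registers the API groups the fake client must know about.
// Without storagev1beta1 registered, the fake client cannot decode CSIDriver
// objects and the CSIDriver test case fails.
func buildTestScheme() (*runtime.Scheme, error) {
	s := scheme.Scheme
	for _, f := range []func(*runtime.Scheme) error{
		storagev1beta1.AddToScheme, // enables the CSIDriver test case
	} {
		if err := f(s); err != nil {
			return nil, err
		}
	}
	return s, nil
}
```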
pkg/storageos/deploy_test.go (outdated)
```go
// Check the pod template component label (required for service).
label := "app.kubernetes.io/component"
want := "storageos-api-manager"
```
These can be replaced with constants from other packages.
pkg/storageos/deploy_test.go (outdated)
```go
	t.Errorf("expected svc label %q not set", label)
}
if ok && got != want {
	t.Errorf("expected svc label %q set to %q, want %q", label, got, want)
```
internal/pkg/image/image.go (outdated)
```go
CSIv0ExternalProvisionerContainerImage = "storageos/csi-provisioner:v0.4.3"
CSIv0ExternalAttacherContainerImage    = "quay.io/k8scsi/csi-attacher:v0.4.2"
DefaultNFSContainerImage               = "storageos/nfs:1.0.0"
DefaultAPIManagerImage                 = "storageos/api-manager:v0.1.0"
```
Maybe set it to the develop tag for now, until we have a versioned release. We can update this along with the node container image before release.
Pushed storageos/api-manager:develop and changed the default.
No description provided.