Is your feature request related to a problem? Please describe.
`zarf tools wait-for` currently works by shelling out to `kubectl wait`. `kubectl wait` is often good enough, but the UX leaves room for improvement. Users have to know the readiness condition or magic word for each resource type: deployments should be "available", while pods should be "ready". Other resources, such as DaemonSets, have no magic word at all and require waiting for a condition. With kstatus, we can assume readiness as the condition, which is likely what users want 90+% of the time.
Additionally, kstatus waits for resources to be fully reconciled, which has advantages. For instance, kstatus will wait for all pods in a deployment to be ready and all previous pods to be terminated, whereas `kubectl wait` only waits for the new pods to be ready.
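To illustrate the difference, here is a rough Python sketch of the kind of check kstatus performs for a Deployment (simplified and illustrative only — field names follow the Kubernetes Deployment status, but this is not Zarf's or kstatus's actual implementation):

```python
def deployment_is_current(obj: dict) -> bool:
    """Simplified approximation of a kstatus-style Deployment readiness check."""
    generation = obj.get("metadata", {}).get("generation", 0)
    spec = obj.get("spec", {})
    status = obj.get("status", {})
    # The controller must have observed the latest spec before status is trusted.
    if status.get("observedGeneration", 0) < generation:
        return False
    desired = spec.get("replicas", 1)
    # All replicas must be updated, ready, and available, and no old replicas
    # may remain (status.replicas counts pods from old ReplicaSets too).
    return (
        status.get("updatedReplicas", 0) == desired
        and status.get("readyReplicas", 0) == desired
        and status.get("availableReplicas", 0) == desired
        and status.get("replicas", 0) == desired
    )
```

The last clause is what `kubectl wait --for=condition=Available` misses: the Available condition can be true while pods from the previous ReplicaSet are still terminating.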
Right now `zarf tools wait-for` does not accept an API version, which kstatus requires. `zarf tools wait-for` will be split into `zarf tools wait-for-resource` and `zarf tools wait-for-network` (#4550). `zarf tools wait-for` will keep using `kubectl wait`, while `zarf tools wait-for-resource` will default to kstatus but use `kubectl wait` when a specific condition is set.
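The engine selection described above could be as simple as the following hypothetical sketch (the `WaitRequest` type and field names are invented for illustration, not the actual CLI code):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class WaitRequest:
    """Hypothetical parsed form of a wait-for-resource invocation."""
    api_version: str            # e.g. "apps/v1" -- now required for kstatus
    kind: str                   # e.g. "Deployment"
    identifier: str             # resource name or label selector
    namespace: str
    condition: Optional[str] = None  # e.g. "'{.status.availableReplicas}'=23"


def pick_engine(req: WaitRequest) -> str:
    # Default to kstatus readiness; only shell out to kubectl wait
    # when the user asked for a specific condition.
    return "kubectl wait" if req.condition else "kstatus"
```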
Describe the behavior you'd like
- Given I have a deployment `podinfo` in the namespace `podinfo`
- When I run a wait-for-resource command with the API version and without a condition:
  `zarf tools wait-for-resource apps/v1 Deployment podinfo -n podinfo`
- Then the deployment is waited for with kstatus as the engine, and it waits for the resource to be ready
- Given I have a statefulset `podinfo` in the namespace `podinfo`
- When I run a wait-for-resource command with the API version, but ask for a specific condition:
  `zarf tools wait-for-resource apps/v1 statefulset podinfo -n podinfo '{.status.availableReplicas}'=23`
- Then the statefulset is waited for with `kubectl wait` as the engine, and it waits for the specific condition
- Given I have two pods matching the selector `app=podinfo`
- When I run a wait command with the API version, no condition set, and a selector:
  `zarf tools wait-for-resource v1 pod app=podinfo ready -n podinfo`
- Then the pods are waited for with kstatus as the engine. While kstatus does not natively handle selectors, Zarf will perform the necessary processing.
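Since kstatus evaluates one object at a time, selector support could be layered on top by listing matching objects and requiring every one to be current. A minimal sketch, assuming an equality-based selector and an `is_current` callback standing in for the per-object kstatus check (both names are illustrative):

```python
def matches_selector(labels: dict, selector: dict) -> bool:
    # Equality-based selector: every key=value pair must match the labels.
    return all(labels.get(k) == v for k, v in selector.items())


def all_selected_current(objects: list, selector: dict, is_current) -> bool:
    """True when at least one object matches the selector and all matches are current."""
    matched = [
        o for o in objects
        if matches_selector(o.get("metadata", {}).get("labels", {}), selector)
    ]
    # No matches means there is nothing to observe yet -- treat as not ready.
    return bool(matched) and all(is_current(o) for o in matched)
```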
Additional context
This will also help the next schema version, which will require that the API version is set under `.wait.cluster`, and will only use `kubectl wait` as the engine when a condition is set.