Description
Is there an existing issue already for this bug?
- I have searched for an existing issue, and could not find anything. I believe this is a new bug.
I have read the troubleshooting guide
- I have read the troubleshooting guide and I think this is a new bug.
I am running a supported version of CloudNativePG
- I am running a supported version of CloudNativePG.
Contact Details
No response
Version
1.25 (latest patch)
What version of Kubernetes are you using?
1.32
What is your Kubernetes environment?
Self-managed: kind (evaluation)
How did you install the operator?
YAML manifest
What happened?
barman-cloud-restore cannot parse the command line arguments.
Reproduction (uses kubectl to run the latest postgis image in the current cluster and imitate the restore command that cnpg uses):

```shell
kubectl run test --image ghcr.io/cloudnative-pg/postgis:15-3.5-78@sha256:19947c968e535e3ac74ee5772731b164a55d342dfadc1bd7fbbdfc7eec2ef7f3 --env POSTGRES_PASSWORD=password
kubectl exec -it test -- barman-cloud-restore source_url server_name backup_id --cloud-provider aws-s3 recovery_dir
```

I would expect this command to parse cleanly, but instead it fails:

```
barman-cloud-restore: error: unrecognized arguments: recovery_dir
command terminated with exit code 3
```
The problem is caused by the position of the optional argument (`--cloud-provider`).
If we remove it, or move it to the front of the command line, parsing succeeds and we get the expected failure instead (`ERROR: Barman cloud restore exception: Invalid s3 URL address: source_url`).
One could argue that I'm calling barman-cloud-restore with the arguments in the wrong order, but this appears to be exactly the order cloudnative-pg uses today.
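For what it's worth, here is one plausible mechanism, sketched with Python's argparse. The parser below is hypothetical: it only mirrors the positionals visible in the failing command, and it *assumes* that `recovery_dir` was turned into an optional positional (`nargs="?"`) at some point, which I have not confirmed against barman's source. Under that assumption, argparse's greedy matching reproduces the exact "unrecognized arguments" error when the flag sits between the positionals, and parses fine when the flag comes first:

```python
import argparse

# Hypothetical parser mirroring barman-cloud-restore's arguments.
# ASSUMPTION (not verified): recovery_dir is an optional positional
# (nargs="?"), e.g. because some restore modes need no directory.
parser = argparse.ArgumentParser(prog="barman-cloud-restore")
parser.add_argument("source_url")
parser.add_argument("server_name")
parser.add_argument("backup_id")
parser.add_argument("recovery_dir", nargs="?")
parser.add_argument("--cloud-provider")

# Flag placed between the positionals, as in the failing command:
# argparse greedily matches the nargs="?" positional against the empty
# tail of the first positional run ("source_url server_name backup_id"),
# so the trailing "recovery_dir" is left over as unrecognized.
_, extra = parser.parse_known_args(
    ["source_url", "server_name", "backup_id",
     "--cloud-provider", "aws-s3", "recovery_dir"]
)
print(extra)  # ['recovery_dir']

# Flag moved to the front: all four positionals form a single run
# and every one of them is consumed.
ok, extra_ok = parser.parse_known_args(
    ["--cloud-provider", "aws-s3",
     "source_url", "server_name", "backup_id", "recovery_dir"]
)
print(ok.recovery_dir, extra_ok)  # recovery_dir []
```

(`parser.parse_args()` would print the same `unrecognized arguments: recovery_dir` error and exit, matching the observed behavior.)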
I see that the relevant code was touched 5 months ago. Could that refactoring have caused this regression? And how has nobody reported it yet?
Cluster resource
Relevant log output
Code of Conduct
- I agree to follow this project's Code of Conduct