Enable a namespaced mode for the crossplane controller pod #6348
Description
What problem are you facing?
Note: I run a Crossplane controller pod without the rbac-manager sidecar.
Currently, the Crossplane controller requires many cluster-wide permissions on Kubernetes-native resources.
Crossplane logs errors and crashes when it does not have cluster-wide list and watch permissions on Deployments, ServiceAccounts, Services, and Secrets. The Helm chart also adds permissions on ConfigMaps, but the controller runs fine without them.
The init container runs and completes just fine.
Here are the minimal permissions currently required for the controller to work properly (no error logs; it starts and runs fine):
Crossplane ClusterRole
```yaml
---
# Cropped and reorganized ClusterRole, to display only relevant info for this issue
- apiGroups:
  - ""
  resources:
  - serviceaccounts
  - services
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - extensions
  - apps
  resources:
  - deployments
  verbs:
  - list
  - watch
```

What if we remove these permissions?
Explanation of why this feature is not yet supported
Then we see error logs. Example:
```
2025-03-20T09:38:03Z ERROR crossplane Unhandled Error {"logger": "UnhandledError", "error": "pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:243: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User \"system:serviceaccount:test-crossplane-ns:crossplane\" cannot list resource \"serviceaccounts\" in API group \"\" at the cluster scope"}
k8s.io/client-go/tools/cache.(*Reflector).Run.func1
    /go/pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:308
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:226
k8s.io/apimachinery/pkg/util/wait.BackoffUntil
    /go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/backoff.go:227
k8s.io/client-go/tools/cache.(*Reflector).Run
    /go/pkg/mod/k8s.io/client-go@v0.31.2/tools/cache/reflector.go:306
k8s.io/client-go/tools/cache.(*controller).Run.(*Group).StartWithChannel.func2
    /go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:55
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1
    /go/pkg/mod/k8s.io/apimachinery@v0.31.2/pkg/util/wait/wait.go:72
```
The same logs appear for Secrets, Deployments, and Services. Also, Crossplane is not deployed in a crossplane-system namespace here, but I don't think that's relevant.
Then the crossplane pod crashes (enters CrashLoopBackOff), with the last log being:
```
crossplane: error: cannot start controller manager: failed to wait for usage/usage.apiextensions.crossplane.io caches to sync: timed out waiting for cache to be synced for Kind *v1beta1.Usage
```
It looks like all these errors put too much stress on the controller's cache and make it crash. I don't know whether it's a matter of performance or of the resources allocated to the pod, but it may certainly be a concern.
When removing only the watch verb, the logs are still produced but the pod doesn't crash. However, creating a Provider resource then doesn't trigger anything.
Hence, we can say that a "namespaced" controller is not currently supported.
We would like this feature because the list verb enables commands such as:

```shell
kubectl get secrets -n test-namespace-1 --as "system:serviceaccount:test-crossplane-ns:crossplane" --as-group "system:authenticated" -oyaml
```

which returns the content of all the Secrets in a namespace (even without the get permission).
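As a sketch of that get-vs-list distinction (the namespace and ServiceAccount names below are the ones from the command above), `kubectl auth can-i` with impersonation can be used to confirm that, under a ClusterRole like the one shown earlier (list/watch only), get is denied while list is still allowed:

```shell
# Under a list/watch-only ClusterRole, "get" on Secrets is denied...
kubectl auth can-i get secrets -n test-namespace-1 \
  --as "system:serviceaccount:test-crossplane-ns:crossplane"

# ...but "list" is allowed, and "kubectl get secrets -oyaml" (no name given)
# performs a list, so the ServiceAccount can still read every Secret's data.
kubectl auth can-i list secrets -n test-namespace-1 \
  --as "system:serviceaccount:test-crossplane-ns:crossplane"
```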
How could Crossplane help solve your problem?
- Adapt the core controller's code to watch namespaced resources only in Crossplane's own namespace, not cluster-wide.
- Change the `crossplane` ClusterRole generated by the Helm chart to remove the permissions on Services, ServiceAccounts, Secrets, Deployments, and ConfigMaps.
- Update the corresponding design doc to reflect these changes (if necessary).

I don't know whether Crossplane has use cases that require the current permissions. If so:
- Add a `--namespaced` option to the controller and the Helm chart so that it watches Kubernetes-native resources only in its own namespace.
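Purely to illustrate the proposed UX (the flag and chart value below are hypothetical; they do not exist in Crossplane today), installing with such an option might look like:

```shell
# Hypothetical chart value that would map to a hypothetical --namespaced flag
# on the crossplane controller, restricting its watches to its own namespace.
helm install crossplane crossplane-stable/crossplane \
  --namespace test-crossplane-ns --create-namespace \
  --set namespacedMode=true
```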
Additional notes
- The problematic resources are the ones native to Kubernetes, not to Crossplane. I point this out in light of Crossplane v2, where Crossplane-related resources might become namespaced at some point.
- A similar issue exists here. It felt like an enhancement rather than a bug, so I created this issue, but let me know if it is a duplicate and whether I should delete it.
- I can't add labels to this issue, but it is related to security.