pkg/apis/rbac: Add Openshift authorization API types #24900
k8s-github-robot merged 1 commit into kubernetes:master
Conversation
@smarterclayton @liggitt @deads2k is "RBAC" the right name for OpenShift v3 Authorization? I thought RBAC was the name for what you put in OpenShift v2 (https://lists.openshift.redhat.com/openshift-archives/dev/2013-September/msg00037.html). To me, RBAC means fixed roles, whereas ABAC or some other name implies that the user can define their own roles based on policies that refer to the attributes of requests. So, I wonder if there is a better name.
FYI, this or a future PR will need to include the following:
Will take a closer look tomorrow.
I don't know that you need them. Without them the rules are verbose, but easily understandable. I'd leave them out.
I'd never thought of rbac as implying fixed roles... interesting. Since the API includes role definitions, and users are granted roles, I think of it more in terms of rbac, but I don't feel that strongly if there's a more descriptive name.
I do think it would make sense to nest under the
subresources are represented as <resource>/<subresource>
Shall we continue our long running argument here? Sometimes this works really well: namespace/status or some such. Sometimes it doesn't work out so well: HPA needs to be able to call */scale. Long term, I'd like to see it split out.
The change is largely compatible.
@ericchiang What is the next step after this? Looks like it is mergeable now that the groupName is rbac.authorization.k8s.io?
@philips it's in the merge queue :) Next step is to add the API group implementation to the apiserver and kube client.
It's tagged for merge, so it'll go in when the kube queue gets there. I think you'll be able to build out the storage and the policy evaluator in parallel. The storage is non-trivial since you need to be able to control who can bind which roles. It's not as simple as just using the generics.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 6a1f468.
Automatic merge from submit-queue |
@deads2k I think you are saying the storage layer needs to have a "I can't grant what I can't do myself" check? If that is what you mean, I think complexity in the storage layer could be avoided as follows:
Haven't thought it through all the way, but basing it on some experience with another system.
I'd want the default to be that you can't grant rights you don't have. Otherwise, a "namespace admin" could create a RoleBinding in their namespace to a "superuser" ClusterRole that elevates their permissions beyond the "namespace-admin" role that was assigned to them.
Could this be expressed as "Users can't create or update a RoleBinding to reference a role they aren't permitted to log in as"? Is there a way to get that information when dealing with the storage layer? Since we don't store group information, is there a way to get authentication info at that level? I'm also trying to think how this would work with bootstrapping users into roles and how we'd express turning off this restriction. For my working mental models, I've been imagining never giving users within a namespace the ability to change
Users don't log in as roles, they are bound to them. It doesn't matter where the group information is stored, as long as it's available via the user.Info interface. The same evaluation would need to be done by an authorizer implementation (user.Info -> bound roles -> conferred permissions). The only difference is that for normal authorizations, we're checking one permission (can the user/groups in user.Info do X on Y). For role grants, we're checking the user/groups in user.Info have all the permissions in the role being granted.
We ended up with something like a
Letting namespace admins manage access to their own namespace is very, very useful, and is a lot safer if you're sure they can't expand their permissions by doing so.
Right. I'm just wondering where we'd exactly put this logic. With my limited knowledge I'd implement it as an admission controller, but when you say "the storage layer" does this mean "pkg/registry/rolebindings"?
Yes, you can substitute any valid … I think that this particular one could also be built into the strategy (…)
@liggitt I was thinking that RoleBindings would only be able to refer to Roles, not ClusterRoles. But looking again, I see from the comments that you allow RoleBindings to reference ClusterRoles. Is that necessary?
@erictune I think RoleBindings are allowed to refer to ClusterRoles so admins can define a role like "namespace role" once and then refer to it multiple times within different namespaces. I've taken an initial crack at an implementation with the comments from @liggitt and @deads2k in #25634. Please take a look. I was thinking that we could define a flag on the API server like --authorization-initial-roles that could define role bindings which got around these checks for bootstrapping.
That's what I thought. So, does it seem like
If these were different Kinds, or at least a flag on the object, I think it would allow the approach I suggested to work. In particular:
However, this still doesn't allow you to grant namespace admin A permission to grant ClusterRole X (which is a type 2 ClusterRole), but not allow admin B to grant ClusterRole X.
The separation of permissions (clusterrole / role) from the scope they apply to (the clusterrolebinding / rolebinding) has seemed pretty clean to me. If we wanted to further restrict which clusterroles/roles a namespaced admin could bind, that could be useful, but I would see that building on unopinionated type definitions.
So, I've been reading through this related to Keystone Authorization. I still think Keystone Authorization should go in as experimental for 1.3, as it adds significant functionality as is. But for 1.4, I think it can be further enhanced with rbac support using this rbac code rather than needing additional custom code.
Looking through the code here, it almost does what would be needed via RoleBinding->Subjects. But Keystone doesn't usually authorize with any of those. It authorizes things by associating Users to Projects via Roles. But if we extended Subject Kind to support a "KeystoneRole", it could look up the Keystone Roles provided in the UserExtra auth field, and then the rbac system could apply permissions in a way similar to how it normally is done in OpenStack services: the admin can create k8s roles that function similar to keystone roles ("admin", "member", etc.).
I think then basic multitenancy would work. Keystone Projects would be mapped to namespaces and isolated from each other, and projects would have a flexible way to set up rbac rules mapping to their existing keystone infrastructure.
making the rbac authz evaluator keystone aware seems like crossing the streams. I also wonder if "what k8s roles does my keystone role have" is too many levels of indirection.
That separation would have to be done all the way down to the workloads running on the nodes... no privileged pods, no direct tenant access to nodes, etc.
@liggitt yeah, it could be. Was just a thought. Yeah, initially, multitenant would be for a subset of cases where the cloud provider wants to provide all the users very quick/simple access to a shared pool of k8s resources, but doesn't want one project to be able to access resources of another. Features would have to be restricted like you said. But I think this use case would be quite valuable to a lot of folks. @zhouhaibing089 is already doing it on one of his systems, and I'd like to provide the same capabilities on a few clouds I maintain too.
Clouds usually provide a way for tenants to launch their own dedicated k8s clusters (OpenStack Magnum for example), so if the user needs more features, like privileged pods, direct node access, etc., then they can still do that, just with some more effort.
An advanced form of multitenancy could be added in the far future, where users could add their own kubelets (via VMs or bare metal) and get them tagged to a particular namespace (or set of namespaces) so they were only available there. Then they could still have privileged pods, direct access, etc., and not need a separate k8s cluster for that. But that would require the kubelet <-> apiserver stuff to be standardized/hardened, the secrets API to restrict access to secrets not destined for the kubelet, and some other things. Maybe some day, or maybe it's just not worth it.
This PR updates #23396 by adding the Openshift RBAC types to a new API group.
Changes from Openshift:
API group is named rbac.authorization.openshift.com as we omitted the AccessReview stuff and that seemed to be the least controversial based on conversations in #23396. Would be happy to change it if there's a dislike for the name. (Edit: API group is named rbac.authorization.k8s.io; sorry, misread the original thread.)
As discussed in #18762, creating a new API group is kind of difficult right now and the documentation is very out of date. Got a little help from @soltysh, but I'm sure I'm missing some things. Also still need to add validation and a RESTStorage registry interface. Hence "WIP".
Any initial comments welcome.
cc @erictune @deads2k @sym3tri @philips