
pkg/apis/rbac: Add Openshift authorization API types#24900

Merged
k8s-github-robot merged 1 commit into kubernetes:master from ericchiang:rbac_types
May 12, 2016

Conversation

@ericchiang
Contributor

ericchiang commented Apr 28, 2016

This PR updates #23396 by adding the Openshift RBAC types to a new API group.

Changes from Openshift:

  • Omission of ResourceGroups as most of these were Openshift specific. Would like to add the concept back in for a later release of the API.
  • Omission of IsPersonalSubjectAccessReview as its implementation relied on Openshift capability.
  • Omission of SubjectAccessReview and ResourceAccessReview types. These are defined in authorization.k8s.io.

API group is named rbac.authorization.openshift.com as we omitted the AccessReview stuff and that seemed to be the least controversial based on conversations in #23396. Would be happy to change it if there's a dislike for the name. Edit: the API group is named rbac, sorry, misread the original thread.

As discussed in #18762, creating a new API group is kind of difficult right now and the documentation is very out of date. Got a little help from @soltysh but I'm sure I'm missing some things. Also still need to add validation and a RESTStorage registry interface. Hence "WIP".

Any initial comments welcome.

cc @erictune @deads2k @sym3tri @philips
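For orientation, here is a rough sketch of the shape of the types under discussion. This is illustrative only; field names are simplified approximations, and the PR diff has the actual definitions.

```go
// Illustrative sketch only: simplified versions of the RBAC types this PR adds.
package main

import "fmt"

// PolicyRule describes what verbs are allowed on which resources.
type PolicyRule struct {
	Verbs     []string // e.g. "get", "list", "create"
	APIGroups []string
	Resources []string
}

// Subject references an identity a binding applies to
// ("User", "Group", or "ServiceAccount").
type Subject struct {
	Kind string
	Name string
}

// Role is a namespaced grouping of PolicyRules.
type Role struct {
	Name  string
	Rules []PolicyRule
}

// RoleBinding grants a Role's permissions to its Subjects.
type RoleBinding struct {
	Name     string
	Subjects []Subject
	RoleName string
}

func main() {
	reader := Role{
		Name: "pod-reader",
		Rules: []PolicyRule{{
			Verbs:     []string{"get", "list"},
			APIGroups: []string{""},
			Resources: []string{"pods"},
		}},
	}
	binding := RoleBinding{
		Name:     "read-pods",
		Subjects: []Subject{{Kind: "User", Name: "alice"}},
		RoleName: reader.Name,
	}
	fmt.Printf("%s grants %v on %v to %s\n",
		binding.Name, reader.Rules[0].Verbs, reader.Rules[0].Resources, binding.Subjects[0].Name)
}
```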

@k8s-github-robot added the kind/api-change, kind/new-api, and size/XXL labels Apr 28, 2016
@erictune
Contributor

@smarterclayton @liggitt @deads2k is "RBAC" the right name for OpenShift v3 Authorization? I thought RBAC was the name for what you put in Openshift v2. (https://lists.openshift.redhat.com/openshift-archives/dev/2013-September/msg00037.html). To me, RBAC means fixed roles, whereas ABAC or some other name implies that the user can define their own roles based on policies that refer to the attributes of requests. So, I wonder if there is a better name.

@erictune
Contributor

FYI, this or a future PR will need to include the following:

  • Add api group in cmd/kube-apiserver/app/server.go (similar to how batch group is added in ab8cfb9, though not quite the same because you only have one api group not two).
  • Install group in pkg/master/master.go
  • Add a client in pkg/client/unversioned/rbac.go, pkg/client/unversioned/policy.go, pkg/client/unversioned/policybinding.go and so on.
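The checklist above boils down to teaching the server about the new group's kinds so it can route and decode them. The following is a toy stand-in for that idea, not the real Kubernetes registration code (which goes through runtime.Scheme and the master config); all names here are illustrative.

```go
// Toy model of "adding an API group": register the group and its kinds so
// the server can answer questions about them. Purely illustrative.
package main

import "fmt"

type APIGroup struct {
	Name  string
	Kinds []string
}

type Server struct {
	groups map[string]APIGroup
}

func NewServer() *Server { return &Server{groups: map[string]APIGroup{}} }

// InstallGroup loosely corresponds to the "install group in master.go" step.
func (s *Server) InstallGroup(g APIGroup) {
	s.groups[g.Name] = g
}

// HasKind reports whether a kind is registered under a group.
func (s *Server) HasKind(group, kind string) bool {
	g, ok := s.groups[group]
	if !ok {
		return false
	}
	for _, k := range g.Kinds {
		if k == kind {
			return true
		}
	}
	return false
}

func main() {
	srv := NewServer()
	srv.InstallGroup(APIGroup{
		Name:  "rbac.authorization.k8s.io",
		Kinds: []string{"Role", "RoleBinding", "ClusterRole", "ClusterRoleBinding"},
	})
	fmt.Println(srv.HasKind("rbac.authorization.k8s.io", "Role")) // true
}
```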

@erictune
Contributor

Will take a closer look tomorrow.

@deads2k
Contributor

deads2k commented Apr 28, 2016

Omission of ResourceGroups as most of these were Openshift specific. Would like to add the concept back in for a later release of the API.

I don't know that you need them. Without them the rules are verbose, but easily understandable. I'd leave them out.

Comment thread pkg/apis/rbac/install/install.go Outdated
Contributor


typo

@liggitt
Member

liggitt commented Apr 28, 2016

is "RBAC" the right name for OpenShift v3 Authorization? I thought RBAC was the name for what you put in Openshift v2. (https://lists.openshift.redhat.com/openshift-archives/dev/2013-September/msg00037.html). To me, RBAC means fixed roles, whereas ABAC or some other name implies that the user can define their own roles based on policies that refer to the attributes of requests. So, I wonder if there is a better name.

I'd never thought of rbac as implying fixed roles... interesting. Since the API includes role definitions, and users are granted roles, I think of it more in terms of rbac, but I don't feel that strongly if there's a more descriptive name.

@liggitt
Member

liggitt commented Apr 28, 2016

I do think it would make sense to nest under the authorization API group, rbac.authorization or similar

Comment thread pkg/apis/rbac/types.go Outdated
Member


subresources are represented as <resource>/<subresource>

Contributor


subresources are represented as <resource>/<subresource>

Shall we continue our long-running argument here? Sometimes this works really well: namespace/status or some such. Sometimes it doesn't work out so well: HPA needs to be able to call */scale. Long term, I'd like to see it split out.

The change is largely compatible.
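The `<resource>/<subresource>` convention being discussed can be sketched as a simple matcher. This is illustrative only; the real RBAC evaluator compares more dimensions (verbs, API groups, resource names).

```go
// Illustrative sketch of matching "resource/subresource" rule strings,
// including the wildcard form "*/scale" mentioned above.
package main

import (
	"fmt"
	"strings"
)

// splitResource separates "resource/subresource" into its parts.
func splitResource(s string) (resource, subresource string) {
	parts := strings.SplitN(s, "/", 2)
	if len(parts) == 2 {
		return parts[0], parts[1]
	}
	return parts[0], ""
}

// ruleMatches reports whether a rule entry such as "namespaces/status" or
// "*/scale" covers the requested resource and subresource.
func ruleMatches(rule, resource, subresource string) bool {
	rr, rs := splitResource(rule)
	return (rr == "*" || rr == resource) && rs == subresource
}

func main() {
	fmt.Println(ruleMatches("namespaces/status", "namespaces", "status")) // true
	fmt.Println(ruleMatches("*/scale", "deployments", "scale"))           // true
	fmt.Println(ruleMatches("pods", "pods", "status"))                    // false
}
```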

@k8s-github-robot added the needs-rebase label Apr 29, 2016
@deads2k added the lgtm label May 11, 2016
@erictune added the priority/backlog label May 11, 2016
@philips
Contributor

philips commented May 12, 2016

@ericchiang What is the next step after this? Looks like it is mergeable now that the groupName is rbac.authorization.k8s.io?

@ericchiang
Contributor Author

@philips it's in the merge queue :)

Next step is to add the API group implementation to the apiserver and kube client.

@deads2k
Contributor

deads2k commented May 12, 2016

@ericchiang What is the next step after this? Looks like it is mergeable now that the groupName is rbac.authorization.k8s.io?

It's tagged for merge, so it'll go in when the kube queue gets there.

I think you'll be able to build out the storage and the policy evaluator in parallel. The storage is non-trivial since you need to be able to control who can bind which roles. It's not as simple as just using the generics.

@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-bot

k8s-bot commented May 12, 2016

GCE e2e build/test passed for commit 6a1f468.

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot merged commit f45bcc5 into kubernetes:master May 12, 2016
@erictune
Contributor

erictune commented May 13, 2016

@deads2k I think you are saying the storage layer needs to have a "I can't grant what I can't do myself" check?

If that is what you mean, I think complexity in the storage layer could be avoided as follows:

  • bootstrap cluster with policy that limits which roles can Create/Update ClusterRoleBindings to a set of cluster-wide admins.
  • bootstrap cluster with policy that limits which roles can Create/Update (namespaced) Roles to a set of cluster-wide admins
    • cluster admins or controllers add carefully curated roles to namespaces, upon namespace creation, and those roles only have within-namespace powers.
    • if namespace users want to create totally custom roles, they need to do that through a proxy API/UI which implements a "can't grant what I can't do" check.
  • a "namespace role admin" can edit (namespaced) RoleBindings, but cannot create brand new roles.

Haven't thought it through all the way, but basing it on some experience with another system.

@liggitt
Member

liggitt commented May 13, 2016

I'd want the default to be that you can't grant rights you don't have. Otherwise, a "namespace admin" could create a RoleBinding in their namespace to a "superuser" ClusterRole that elevates their permissions beyond the "namespace-admin" role that was assigned to them.

@ericchiang
Contributor Author

I'd want the default to be that you can't grant rights you don't have. Otherwise, a "namespace admin" could create a RoleBinding in their namespace to a "superuser" ClusterRole that elevates their permissions beyond the "namespace-admin" role that was assigned to them.

Could this be expressed as "Users can't create or update a RoleBinding to reference a role they aren't permitted to login as"? Is there a way to get that information when dealing with the storage layer? Since we don't store group information, is there a way to get authentication info at that level?

I'm also trying to think how this would work with bootstrapping users into roles and how we'd express turning off this restriction.

For my working mental models, I've been imagining never giving users within a namespace the ability to change RoleBindings, only granting that privilege at the cluster level (e.g. a cluster admin grants permissions to teams), but I can understand that that doesn't work for all use cases.

@liggitt
Member

liggitt commented May 13, 2016

Could this be expressed as "Users can't create or update a RoleBinding to reference a role they aren't permitted to login as"? Is there a way to get that information when dealing with the storage layer? Since we don't store group information, is there a way to get authentication info at that level?

Users don't log in as roles, they are bound to them. It doesn't matter where the group information is stored, as long as it's available via the user.Info interface. The same evaluation would need to be done by an authorizer implementation (user.Info -> bound roles -> conferred permissions). The only difference is that for normal authorizations, we're checking one permission (can the user/groups in user.Info do X on Y). For role grants, we're checking the user/groups in user.Info have all the permissions in the role being granted.

I'm also trying to think how this would work with bootstrapping users into roles and how we'd express turning off this restriction.

We ended up with something like a create(roleBinding RoleBinding, allowEscalation bool) function. The rest storage can ensure on startup that a role granting * on * exists and is bound to a specified superuser, using the allowEscalation=true option to skip the checks. That "allowEscalation" option isn't available when called via the API in OpenShift.

For my working mental models, I've been imagining never giving users within a namespace the ability to change RoleBindings, only granting that privilege at the cluster level (e.g. a cluster admin grants permissions to teams), but I can understand that that doesn't work for all use cases.

Letting namespace admins manage access to their own namespace is very very useful, and is a lot safer if you're sure they can't expand their permissions by doing so.
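The evaluation liggitt describes, checking that the granter already holds every permission in the role being granted, could be sketched as a covers-style comparison. This is a simplification under stated assumptions: real rules also carry API groups, resource names, and non-resource URLs, and the owned rules would first be resolved from user.Info via the bound roles.

```go
// Illustrative sketch of an escalation check: a grant is allowed only if
// the granter's own permissions cover every rule in the role being granted.
package main

import "fmt"

// PolicyRule is reduced to a verb/resource pair for illustration.
type PolicyRule struct {
	Verb     string
	Resource string
}

// covers reports whether every granted rule is already held by the granter;
// "*" acts as a wildcard in the owned rules.
func covers(owned, granted []PolicyRule) bool {
	for _, g := range granted {
		ok := false
		for _, o := range owned {
			if (o.Verb == "*" || o.Verb == g.Verb) &&
				(o.Resource == "*" || o.Resource == g.Resource) {
				ok = true
				break
			}
		}
		if !ok {
			return false
		}
	}
	return true
}

func main() {
	// A namespace admin with full power over pods only.
	admin := []PolicyRule{{Verb: "*", Resource: "pods"}}
	fmt.Println(covers(admin, []PolicyRule{{Verb: "get", Resource: "pods"}}))    // true
	fmt.Println(covers(admin, []PolicyRule{{Verb: "get", Resource: "secrets"}})) // false: escalation
}
```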

@ericchiang
Contributor Author

Users don't log in as roles, they are bound to them. It doesn't matter where the group information is stored, as long as it's available via the user.Info interface. The same evaluation would need to be done by an authorizer implementation (user.Info -> bound roles -> conferred permissions). The only difference is that for normal authorizations, we're checking one permission (can the user/groups in user.Info do X on Y). For role grants, we're checking the user/groups in user.Info have all the permissions in the role being granted.

Right. I'm just wondering where we'd exactly put this logic. With my limited knowledge I'd implement it as an admission controller, but when you say "the storage layer" does this mean "pkg/registry/rolebindings"?

@deads2k
Contributor

deads2k commented May 13, 2016

Right. I'm just wondering where we'd exactly put this logic. With my limited knowledge I'd implement it as an admission controller, but when you say "the storage layer" does this mean "pkg/registry/rolebindings"?

Yes, you can substitute any valid Creater, Updater, etc interface for your RESTStorage when you call the APIInstaller. It gives you a spot to make the extended logic work.

I think that this particular one could also be built into the strategy (ValidateUpdate) if you wanted.
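The create-with-override pattern liggitt mentioned earlier (an allowEscalation option available to bootstrap code but not to the API path) might look roughly like this. The function name, signature, and store are illustrative stand-ins, not the actual OpenShift or Kubernetes code.

```go
// Illustrative sketch of a storage-layer create hook that enforces the
// escalation check unless explicitly bypassed during bootstrap.
package main

import (
	"errors"
	"fmt"
)

type RoleBinding struct {
	Role      string
	Escalates bool // stand-in for "grants rights the caller doesn't hold"
}

var store []RoleBinding

// createRoleBinding rejects escalating bindings unless allowEscalation is
// set. The API path would always pass allowEscalation=false; bootstrap code
// may pass true to install the initial superuser binding.
func createRoleBinding(rb RoleBinding, allowEscalation bool) error {
	if rb.Escalates && !allowEscalation {
		return errors.New("attempt to grant permissions not held by the caller")
	}
	store = append(store, rb)
	return nil
}

func main() {
	fmt.Println(createRoleBinding(RoleBinding{Role: "cluster-admin", Escalates: true}, false)) // rejection error
	fmt.Println(createRoleBinding(RoleBinding{Role: "cluster-admin", Escalates: true}, true))  // <nil>
}
```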

@erictune
Contributor

@liggitt I was thinking that RoleBindings would only be able to refer to Roles, not ClusterRoles. But looking again, I see from the comments that you allow RoleBindings to reference ClusterRoles. Is that necessary?

@ericchiang
Contributor Author

@erictune I think RoleBindings are allowed to refer to ClusterRoles so admins can define a role like "namespace role" once and then refer to it multiple times within different namespaces.

I've taken an initial crack at an implementation with the comments from @liggitt and @deads2k in #25634. Please take a look.

I was thinking that we could define a flag on the API server like --authorization-initial-roles that could define role bindings which got around these checks for bootstrapping.

@erictune
Contributor

@erictune I think RoleBindings are allowed to refer to ClusterRoles so admins can define a role like "namespace role" once and then refer to it multiple times within different namespaces.

That's what I thought. So, does it seem like ClusterRole is being used for two different things:

  1. A ClusterRole that is created at cluster scope because it is intended to apply to all objects, regardless of namespace, and/or because it applies to un-namespaced objects.
  2. A ClusterRole, intended to grant permissions to namespaced objects, where the particular namespace is a "parameter".

If these were different Kinds, or at least a flag on the object, I think it would allow the approach I suggested to work. In particular:

  • ClusterRoles of type 1 are never granted to namespaced users, only cluster admins and controllers.
  • ClusterRoles of type 2 can be viewed by anyone, and safely referenced by anyone with create/update permission on RoleBindings.
  • Don't put permissions of both type 1 and type 2 in the same ClusterRole.

However, this still doesn't allow you to grant namespace admin A permission to grant ClusterRole X (which is a type 2 ClusterRole), but not allow admin B to grant ClusterRole X.
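To make the two uses concrete, here is an illustrative sketch (simplified field names, not the real API) of the same ClusterRole kind bound cluster-wide versus re-used as a template inside one namespace:

```go
// Illustrative sketch contrasting a cluster-wide grant of a ClusterRole
// with a per-namespace grant of the same ClusterRole via a RoleBinding.
package main

import "fmt"

type RoleRef struct{ Kind, Name string }

// Binding stands in for both ClusterRoleBinding (empty Namespace) and
// RoleBinding (Namespace set); the real API uses two separate kinds.
type Binding struct {
	Namespace string
	RoleRef   RoleRef
	Subject   string
}

func main() {
	// Type 1: cluster-wide grant, applying across all namespaces.
	clusterWide := Binding{
		RoleRef: RoleRef{Kind: "ClusterRole", Name: "node-admin"},
		Subject: "ops-team",
	}
	// Type 2: the same ClusterRole kind used as a reusable template,
	// granted only within a single namespace via a RoleBinding.
	perNamespace := Binding{
		Namespace: "team-a",
		RoleRef:   RoleRef{Kind: "ClusterRole", Name: "namespace-admin"},
		Subject:   "alice",
	}
	fmt.Println(clusterWide.Namespace == "", perNamespace.Namespace)
}
```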

@liggitt
Member

liggitt commented May 17, 2016

The separation of permissions (clusterrole / role) from the scope they apply to (the clusterrolebinding / rolebinding) has seemed pretty clean to me. If we wanted to further restrict what clusterroles/roles a namespaced admin could bind, that could be useful, but I would see that building on unopinionated type definitions.

@kfox1111

So, I've been reading through this related to Keystone Authorization:
#25624

I still think Keystone Authorization should go in as experimental for 1.3, as it adds significant functionality as is. But for 1.4, I think it can be further enhanced with rbac support using this rbac code rather than needing additional custom code.

Looking through the code here, it almost does what would be needed via RoleBinding->Subjects, but Subject Kind looks to be too restrictive at the moment. It currently supports "User", "Group", and "ServiceAccount".

But, Keystone doesn't usually authorize with any of those. It authorizes things by associating Users to Projects via Roles.

But if we extended Subject Kind to support a "KeystoneRole", it could look up the Keystone Roles provided in the UserExtra auth field, and then the rbac system could apply permissions similar to how it's normally done in OpenStack services:
http://docs.openstack.org/kilo/config-reference/content/policy-json-file.html

The admin can create k8s roles that function similarly to keystone roles ("admin", "member", etc.) and k8s RoleBindings that map a keystone role "member" to a k8s role "member". A rule to map a keystone tenant to a namespace would need to get added somewhere too. Maybe that last part belongs in the Keystone AuthZ module... not sure yet.

I think then basic multitenancy would work. Keystone Projects would be mapped to namespaces and isolated from each other, and projects would have a flexible way to set up rbac rules mapping to their existing keystone infrastructure.
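The mapping kfox1111 proposes could be sketched as a simple lookup. This is entirely hypothetical: no "KeystoneRole" Subject kind exists in the merged API, and the map and function names below are made up for illustration.

```go
// Hypothetical sketch of mapping Keystone roles (carried in the UserExtra
// auth field) to in-cluster RBAC role names, as proposed above.
package main

import "fmt"

// keystoneRoleToK8sRole is a hypothetical, admin-configured mapping.
var keystoneRoleToK8sRole = map[string]string{
	"admin":  "admin",
	"member": "member",
}

// rolesFor returns the k8s roles implied by a user's Keystone roles,
// silently dropping Keystone roles with no configured mapping.
func rolesFor(userExtraRoles []string) []string {
	var out []string
	for _, kr := range userExtraRoles {
		if r, ok := keystoneRoleToK8sRole[kr]; ok {
			out = append(out, r)
		}
	}
	return out
}

func main() {
	fmt.Println(rolesFor([]string{"member", "heat_stack_owner"})) // [member]
}
```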

@liggitt
Member

liggitt commented May 20, 2016

making the rbac authz evaluator keystone aware seems like crossing the streams. I also wonder if "what k8s roles does my keystone role have" is too many levels of indirection.

I think then basic multitenancy would work. Keystone Projects would be mapped to namespaces and isolated from each other

That separation would have to be done all the way down to the workloads running on the nodes... no privileged pods, no direct tenant access to nodes, etc.

@kfox1111

@liggitt yeah, it could be. Was just a thought.

Yeah, initially, multitenant would be for a subset of cases where the cloud provider wants to provide all the users very quick/simple access to a shared pool of k8s resources, but doesn't want one project to be able to access resources of another. Features would have to be restricted like you said. But I think this use case would be quite valuable to a lot of folks. @zhouhaibing089 is already doing it on one of his systems, and I'd like to provide the same capabilities on a few clouds I maintain too. Clouds usually provide a way for tenants to launch their own dedicated k8s clusters (OpenStack Magnum for example) so if the user needs more features, like privileged pods, direct node access, etc, then they can still do that. Just with some more effort.

An advanced form of multitenancy could be added in the far future, where users could add their own kubelets (via vms or bare metal) and get them tagged to a particular namespace (or set of namespaces) so they were only available there. Then they could still have privileged pods, direct access, etc. and not need a separate k8s cluster for that, but that would require the kubelet <-> apiserver stuff to be standardized/hardened, the secrets api to restrict access to secrets not destined for the kubelet, and some other things. Maybe some day, or maybe it's just not worth it.

@ericchiang deleted the rbac_types branch May 31, 2016 23:48
openshift-publish-robot pushed a commit to openshift/kubernetes that referenced this pull request Apr 23, 2020
…nt-e2e

Bug 1823494: Storage tests should more carefully select zones for testing

Origin-commit: 980671dade33af9e71c73a38be404897dad373b0

Labels

  • kind/api-change: Categorizes issue or PR as related to adding, removing, or otherwise changing an API
  • lgtm: "Looks good to me", indicates that a PR is ready to be merged.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
  • release-note-none: Denotes a PR that doesn't merit a release note.
  • size/XXL: Denotes a PR that changes 1000+ lines, ignoring generated files.
