
Add docs for kops, explain when kops and when kubeadm #1507

Merged

justinsb merged 1 commit into kubernetes:master from justinsb:kops_docs
Oct 26, 2016

Conversation

@justinsb
Member

@justinsb justinsb commented Oct 19, 2016

Quick getting started guide for kops.

Also try to provide some guidance as to when to use kops and when to use
kubeadm, based on discussions with sig-cluster-lifecycle concluding that
kubeadm is a building block and not a provisioning tool.

I used "kubeadm" as a simplifying concept for "kubeadm and the other
work done by sig-cluster-lifecycle that kubeadm is a part of", in that
tools that don't (yet) use kubeadm are still leveraging the kubeadm
stream of work.



@mikedanese
Member

@kubernetes/sig-cluster-lifecycle

Contributor

@jbeda jbeda left a comment


I think it is great to document kops. I don't agree with many of the changes for how kubeadm is positioned here. I think the alpha label is enough to make people read the caveats.

Comment thread docs/getting-started-guides/kops.md Outdated
but not under `example.com`).

Let's assume you're using `dev.example.com` as your hosted zone. You create that hosted zone using
the [normal process](http://docs.aws.amazon.com/Route53/latest/DeveloperGuide/CreatingNewSubdomain.html),
Contributor

Can this be done by the AWS CLI? If so, that might be best as you assume that folks are using it in the next step.

Member Author

People have trouble with this step, so it's much easier to point them to the official docs and get them into the control panel, IMO.

Contributor

Is there value in providing both?

Member Author

OK, seeing as you both want it, I'll add it :-)
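The CLI alternative discussed above might look like the following sketch, assuming the AWS CLI is configured and `dev.example.com` is the subdomain being delegated (the caller reference is just any unique string you choose; the zone ID placeholder is illustrative):

```shell
# Create the hosted zone for the subdomain
aws route53 create-hosted-zone --name dev.example.com --caller-reference 2016-10-19-kops

# Look up the nameservers assigned to the new zone...
aws route53 get-hosted-zone --id <zone-id-from-previous-output>

# ...then add an NS record set for dev.example.com in the parent
# example.com zone, pointing at those nameservers.
```

The delegation step (the NS records in the parent zone) is the part people most often get wrong, which is why the console walkthrough is still worth linking.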

that you have created, along with their configuration, the keys they are using etc. This information is stored
in an S3 bucket. S3 permissions are used to control access to the bucket.

Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that
Contributor

You should note the security implications of this. Any one with access to this state store will have "root" on your cluster.

Member Author

Good call!
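A sketch of creating the state store and pointing kops at it (the bucket name is an example; per the caveat above, anyone with access to this bucket effectively has root on the cluster, so lock down its S3 permissions):

```shell
# Create an S3 bucket to hold the kops state store
aws s3 mb s3://clusters.dev.example.com

# Tell kops to use it for all subsequent commands
export KOPS_STATE_STORE=s3://clusters.dev.example.com
```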


Run "kops create cluster" to create your cluster configuration:

`kops create cluster --zones=us-east-1c useast1.dev.example.com`
Contributor

It is confusing that this doesn't actually start VMs. Might want to clarify that this is just configuration and doesn't actually start machines.

Member Author

Good call - will fix
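To make the configuration-vs-creation distinction explicit, the flow might be sketched as (it is `--yes` that actually creates AWS resources):

```shell
# Creates cluster *configuration* in the state store only -- no VMs yet
kops create cluster --zones=us-east-1c useast1.dev.example.com

# Optionally inspect or tweak the configuration
kops edit cluster useast1.dev.example.com

# Preview the changes, then actually create the AWS resources
kops update cluster useast1.dev.example.com
kops update cluster useast1.dev.example.com --yes
```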

Comment thread docs/getting-started-guides/kubeadm.md Outdated

This quickstart shows you how to easily install a secure Kubernetes cluster on machines running Ubuntu 16.04 or CentOS 7.
The installation uses a tool called `kubeadm` which is part of Kubernetes 1.4.
The installation uses a tool called `kubeadm` which is alpha as of Kubernetes 1.4.
Contributor

We have "alpha" in bold below. I don't think we need it here too.

Member Author

Yeah, I was trying to side-step the issue of what does it mean for something to be "part of" kubernetes 1.4 vs not. I'll revert as this is a huge hairball...

Comment thread docs/getting-started-guides/kubeadm.md Outdated
It is simple enough that you can easily integrate its use into your own automation (Terraform, Chef, Puppet, etc).

**The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!**
kubeadm is not a complete provisioning tool; it is designed as a part of a larger provisioning
Contributor

"provisioning" is a really squishy word. I would say something like this and merge it with the paragraph above:

"""
kubeadm doesn't do all the work to create a cluster from scratch. Instead, it is assumed that the user already has a set of machines (virtual or real) that are up and running. A combination of native Linux distribution packages and container images are used to bootstrap Kubernetes on those machines. kubeadm is built to be able to integrate easily with other lower level automation (Terraform, Chef, Puppet, etc.).
"""

Member Author

The problem I'm trying to solve is that users are genuinely confused about when they should use kubeadm, and when they should use systems like GKE or kops.

I don't mean to denigrate kubeadm, I think there's actually a clear distinction to be drawn:

  • If you want to build-your-own, use kubeadm and integrate with your systems
  • If you just want a working kubernetes cluster on GCE, and can live with the opinions, you should probably use GKE
  • If you just want a working kubernetes cluster on AWS, and can live with the opinions, you should probably use kops
  • If you can't live with either of these GKE/kops restrictions, you should use kubeadm.

I don't want to confuse people with details of the how at this point, so I merged our paragraphs to be:

kubeadm assumes you have a set of machines (virtual or real) that are up and running.  It is designed
to be part of a larger provisioning system - or just for easy manual provisioning.  kubeadm is a great
choice where you have your own infrastructure (e.g. bare metal), or where you have an existing
orchestration system (e.g. Puppet) that you have to integrate with.

own infrastructure (e.g. bare metal), or where you have an existing orchestration system (e.g. Puppet)
that you have to integrate with.

If you are not constrained, other tools build on kubeadm to give you complete clusters:
Contributor

I don't think we need to thread this through here. We have a navigation bar on the left that lists other choices. As things expand we won't want to update references all over the place and keep this stuff up to date. For example, if kops moves beyond just AWS, we'd have to come update this doc.

Member Author

Sure, but we need a way to draw the distinction as users are confused.
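For concreteness, the build-your-own usage being contrasted here is roughly the following (kubeadm 1.4-era commands; the token and master IP are placeholders printed by `kubeadm init`):

```shell
# On the master, after installing the kubelet/kubeadm packages
kubeadm init

# On each node, join using the token printed by `kubeadm init`
kubeadm join --token=<token> <master-ip>
```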

Comment thread docs/getting-started-guides/kubeadm.md Outdated
* On AWS, [kops](kops) makes installation and cluster management easy (and supports high availability)

**The `kubeadm` tool is currently in alpha but please try it out and give us [feedback](/docs/getting-started-guides/kubeadm/#feedback)!
Be sure to read the [limitations](#limitations); in particular note that kubeadm is not currently recommended
Contributor

I'm cool with pointing people at the limitation section below.

But I wouldn't say that it isn't recommended for GCE or AWS -- it is just more leg work to get it working. I'd rather document how to enable cloud providers than wave people off. We do have the --cloud-provider flag but haven't really documented how to use it well yet.

Member Author

Again, the problem is that I'm seeing a lot of users with failed kubeadm installations on AWS, many of whom didn't understand that it wasn't sufficient to simply set the --cloud-provider flag.

Is there a way you can suggest to rephrase @jbeda ?

Contributor

I'd rather fix up the documentation and teach people how to do AWS integration ("the hard way") rather than just wave them off. As time moves on we can build stuff in to kubeadm as needed and create clear separation.

If we don't have time to do this -- we can say something like "kubeadm doesn't have great support for automatically configuring cloud providers. Please refer to the specific cloud provider documentation or use another provisioning system".

Member Author

I like that! And we can remove/repoint once those AWS/kubeadm docs are there.

Comment thread docs/index.md Outdated
<h3>Installing Kubernetes on Linux with kubeadm</h3>
<p>This quickstart will show you how to install a secure Kubernetes cluster on any computers running Linux, using a tool called <code>kubeadm</code> which is part of Kubernetes. It'll work with local VMs, physical servers and/or cloud servers, either manually or as part of your own automation. It is currently in alpha but please try it out and give us feedback!</p>
<p>This quickstart will show you how to install a secure Kubernetes cluster on any computers running Linux, using a tool called <code>kubeadm</code>. It'll work with local VMs, physical servers and/or cloud servers, either manually or as a part of your own automation. It is currently in alpha but please try it out and give us feedback!</p>
<p>If you are looking for a complete production cluster, note that kubeadm is intended as a building block. Tools such as GKE and kops build on kubeadm to provision a complete cluster.</p>
Contributor

I think that the "alpha" label does enough to warn folks against production usage. And we don't want to brand kubeadm as a "toy".

Member Author

My main change here was to add the second paragraph. I called it a "building block", not a "toy", which I believe is the correct positioning. I do not imply that kubeadm is unsafe for production. The intention is to say "kubeadm is a building block that you can build up into a complete solution, here are two opinionated solutions if you just want a k8s cluster now". Perhaps if I removed the word "production" from "complete production cluster" - but then "If you are looking for a complete cluster...." read awkwardly. Equally I felt "If you are looking for a production-ready cluster" would imply that kubeadm was not production-ready. Suggestions?

The reason the first paragraph came up in the diff is because I removed "which is part of Kubernetes", because

  1. I didn't think it read very well
  2. It certainly wouldn't have read very well if we then mentioned other tools. It would beg the question "Is GKE part of Kubernetes?" which is a real hairball ;-)
  3. The page layout was awkward anyway. Personally I think we'll have to shorten this whole section anyway, but I thought this would be uncontroversial...

If you do care, I can put back "which is part of Kubernetes"

Contributor

How about we replace "If you are looking for a complete production cluster" with "If you are looking for a fully automated solution" here?

Member Author

Great suggestion! I also like "fully automated" as a way of describing it: it is non-judgemental, but clearly draws the distinction. And to some people "fully automated" is good, to some people it is bad, and the interpretation probably maps well to the direction people should go.

@justinsb
Member Author

Changes made per-review - thanks for the input @jbeda :-)

Contributor

@chrislovecnm chrislovecnm left a comment


Looks great!


## Feedback

* Slack Channel: [#sig-aws](https://kubernetes.slack.com/messages/sig-aws/) has a lot of kops users
Contributor

Do we want sig aws or cluster-lifecycle? I am thinking lifecycle.


cluster-lifecycle seems more fitting, but sig-aws really is where the activity is... either makes sense for now but that might change once kops is viable for other providers?

Member Author

I think that makes sense. When/if we find that the kops discussion is primarily about kops & lifecycle, and not about kops & k8s & AWS then we can revisit.

If this is your first time using kops, do spend a few minutes to try those out! An instance group is a
set of instances, which will be registered as Kubernetes nodes. On AWS this is implemented via Auto Scaling groups.
You can have several instance groups, for example if you wanted nodes that are a mix of spot and on-demand instances, or
GPU and non-GPU instances.

I think it'd be helpful to cover the default instance sizing and the minimum recommended for masters and nodes. I know there are a lot of parameters covered in the kops docs but this is something that can cause a cluster to have problems right away.

Contributor

I think we should point them to the kops docs for that. This is a general quick start guide. HA, sizing, spot instances - all that fun should be housed in the kops repo.

Contributor

How does that sound?

Member Author

@jaygorrell if the defaults don't work out of the box, do let me know as we should fix the defaults. I would like to keep the details out of this doc (and create a separate doc for those details - I feel we have reasonable docs on most kops features, but it is really hard to discover them). One thing that maybe does belong in this doc is things that can't be easily changed, but we don't have that doc yet anyway.


That's fair - I only mention it because of the suggestion that the reader spend some time playing with IGs, since the settings most people would play around with would involve cheap clusters to experiment. Leaving the details out here makes sense in general though of course.

Member Author

It's a good point though... I added a paragraph that tries to provide some details by assimilation...

As an example: the nodes instance group defaults to t2.medium instances, but you might want to change this to an m4 or c4 class instance type for production usage, or you may want to use spot instances (by setting maxPrice).

(A snafu here is that us-east-2 doesn't support the m3.mediums we use elsewhere, so the defaults can get a little complicated...)
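The sizing paragraph above can be illustrated with a sketch of how those settings are actually changed (spec field names as in kops instance groups; the values are examples, not recommendations):

```shell
# List the instance groups in the cluster, then edit the "nodes" group
kops get instancegroups
kops edit ig nodes

# In the editor, the relevant spec fields look roughly like:
#   spec:
#     machineType: m4.large   # default is t2.medium
#     maxPrice: "0.10"        # set to request spot instances at this bid

# As with cluster config, changes take effect on the next:
kops update cluster --yes
```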

@justinsb
Member Author

Made the rest of the fixes per code review

@justinsb
Member Author

@jbeda are you OK with this now? You have a big red X by your review :-)

@jbeda
Contributor

jbeda commented Oct 22, 2016

I can live with this :) Thanks!

@luxas
Member

luxas commented Oct 24, 2016

@mikedanese can merge this one when @justinsb has rebased

Adding the labels since it seems like everything's LGTM'd

@justinsb
Member Author

Rebased & squashed.

@luxas
Member

luxas commented Oct 25, 2016

@mikedanese Any comments or can I merge this?

@mikedanese
Member

/lgtm

@chrislovecnm
Contributor

WTH does this still show as conflicting. This new feature from github is weird.

@justinsb justinsb merged commit bfd03fe into kubernetes:master Oct 26, 2016


8 participants