Conversation

[APPROVALNOTIFIER] This PR is NOT APPROVED. Needs approval from an approver in each of these OWNERS files.
    mountPath: /etc/statsd
    - name: {{ template "mixer.name" . }}
    image: "{{ .Values.global.mixer.repository }}:{{ .Values.global.mixer.tag }}"
    image: "{{ .Values.global.hub }}/mixer:{{ .Values.global.tag }}"
    - "2"
    - --discoveryAddress
    - {{ .Release.Name }}-pilot:15003
    - istio-pilot:15003
    heritage: {{ .Release.Service }}
    istio: ingress
    spec:
    {{- if .Values.service.nodePort.enabled }}
The nodePort.enabled feature is to let people evaluate Istio when they lack a load balancer. I think the current Istio-generated manifest has this NodePort feature commented out, so the conditional is warranted.
Super useful for minikube - but instead of a conditional I added two services, one NodePort and one LoadBalancer. This way we can have both.
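A minimal sketch of the two-Service approach described above; the names, ports, and nodePort value are illustrative, not the chart's actual contents:

```yaml
# Hypothetical sketch: expose the same ingress pods through both a
# LoadBalancer Service and a NodePort Service.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingress
spec:
  type: LoadBalancer
  selector:
    istio: ingress
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: istio-ingress-nodeport
spec:
  type: NodePort
  selector:
    istio: ingress
  ports:
  - name: http
    port: 80
    nodePort: 32000  # illustrative; must be in the cluster's NodePort range
```

Environments without a cloud load balancer (minikube) use the NodePort Service; clouds get the LoadBalancer one.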
    istio: ingress
    ---
    {{- if .Values.service.nodePort.enabled }}
    # Common settings.
    global:
      # Default tag for Istio images.
      hub: istio
Is this hub correct? Wouldn't it be docker.io/istio or gcr.io/istio?

docker.io is added automatically, and it seems to work.

It would be good to spell it out, as I'm not sure it would work when a custom registry is used in various cloud providers. Also for clarity :)

Will do it in a separate PR; more changes will likely be needed before we change the tests.
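If the hub were spelled out as suggested, the values fragment might look like this (registry names are illustrative):

```yaml
global:
  # Fully-qualified registry/namespace instead of relying on the
  # implicit docker.io default; values below are illustrative.
  hub: docker.io/istio
  # or, for a custom registry:
  # hub: gcr.io/my-project/istio
```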
    httpGet:
      path: /
      port: {{ .Values.service.internalPort }}
    # livenessProbe:
Commented out because it fails with a 404. We need to add it back, but I'm not sure what URL to use.
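Until the right HTTP path is known, one hedged stand-in is a TCP probe, which checks that the port accepts connections and cannot fail on a 404; this is a sketch, not the chart's actual probe:

```yaml
# Hypothetical replacement: probe the port without an HTTP path.
livenessProbe:
  tcpSocket:
    port: {{ .Values.service.internalPort }}
  initialDelaySeconds: 10
  periodSeconds: 10
```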
When I tested, I accidentally deployed to the istio namespace rather than the istio-system namespace, which results in an error. I'll try istio-system. In the past, the helm charts have always deployed by default to the default namespace. IOW there may be a latent issue with the helm chart related to custom namespaces. Otherwise, things look good!
Interesting point - we typically recommend using istio-system, and any upgrade from the existing template to the new ones would go there. The existing template has explicit 'namespace' statements; I should probably add them as a conditional. At the same time, there is no reason the template shouldn't work in the default or other namespaces (except it's not a recommended config). I suspect it's a 0.5.1 bug - I tested with a '--set' option pointed to master.
…On Fri, Feb 23, 2018 at 7:01 AM, Steven Dake ***@***.***> wrote:
When I tested, I accidentally deployed to the istio namespace rather than
the istio-system namespace which results in:
```
Error: failed to convert to proto. unknown field "mixerCheckServer" in v1alpha1.MeshConfig
```
```
***@***.***:~/go/src/istio.io/istio/samples/bookinfo/kube$ istioctl version
Version: 0.5.1
GitRevision: 30acfe6
User: ***@***.***
Hub: docker.io/istio
GolangVersion: go1.9
BuildStatus: Clean
```
I'll try istio-system. In the past, the helm charts have always deployed by default to the default namespace. IOW there may be a latent issue with the helm chart related to custom namespaces. Otherwise, things look good!
Agreed, it is likely a problem with the 0.5.1 images being out of sync with the manifests. I have just rebuilt latest master and will report back whether that works for me or not. Cheers
sdake left a comment:

I tested with master and a master version of istioctl using manual injection. That works well enough. I think some more sanitization and testing is in order over time, but this PR looks good to go from my POV.
    metadata:
      labels:
        app: {{ template "grafana.name" . }}
        release: {{ .Release.Name }}
I think there is value in keeping the release label so users can easily query what makes up a helm release.

This breaks upgrading from istio.yaml (release / upgradeVersion) to the helm-generated istio.yaml, which is pretty important for now. We can add annotations or other markers. I'm also not sure the cost is worth it (k8s keeps indexes by label).
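If release markers are still wanted without touching the label set, annotations are one option mentioned above; the annotation key here is hypothetical, not an existing Istio or Helm convention:

```yaml
metadata:
  annotations:
    # Hypothetical marker key: records the helm release without
    # affecting label selectors or label-based indexes.
    install.istio.io/release: {{ .Release.Name }}
```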
    name: {{ .Values.service.name }}
    selector:
      app: {{ template "grafana.name" . }}
      release: {{ .Release.Name }}
Confirmed that the release label etc. show up in the deployment (as labels). The change only removes them from the replica set and pods.
    labels:
      app: {{ template "mixer.name" . }}
      release: {{ .Release.Name }}
      istio: mixer
I think these labels are a helm convention you may want to keep.

I tried. We can add them later (but it will break upgrade). k8s treats the label set as immutable, for some strange reason.
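The constraint referred to here typically bites through the selector: an apps/v1 Deployment's selector is immutable after creation, and charts usually keep the selector and pod-template labels in sync, so changing a templated label on upgrade means changing the selector, which the API server rejects. A hedged sketch (names illustrative):

```yaml
# Illustration only, not the chart's actual Deployment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: istio-mixer
spec:
  selector:
    matchLabels:
      istio: mixer      # immutable once the Deployment exists
  template:
    metadata:
      labels:
        istio: mixer    # must keep matching the selector above
```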
    labels:
      app: istio-ca
    spec:
      type: LoadBalancer
Do we always want to do LoadBalancer type for this? I ask because istio ingress has a NodePort option.

That's what we have in istio.yaml - we can add a NodePort as well, but separate PR :-)
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
    istio: sidecar-injector
    data:
      mesh: |-
    {{- if .Values.global.securityEnabled }}
    {{- if .Values.global.mtlsDefault }}
Could we call it mtlsEnabled? "Default" is confusing as to on or off...

Do you mind doing this in a separate PR? Don't want to go through another wait :-)

I had to rebase, so I updated it.
    # create RBAC resources
    # Default mtls policy. If true, mtls between services will be enabled by default.
    mtlsDefault: false
I'm confused that we want to expose both securityEnabled and mtlsDefault (or mtlsEnabled) to our users, because you can't have mtlsEnabled = true while securityEnabled = false. Maybe make mtlsEnabled a child property under securityEnabled?

Well, technically you can... securityEnabled means istio CA is installed. In theory you can have another component generating certificates (for example using Vault or a native CA), or use some other SPIFFE implementation.
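To make the independence of the two flags concrete, the values fragment might look like this (key names follow the discussion; the comments are an interpretation, not chart documentation):

```yaml
global:
  # Install the bundled istio CA. mTLS can still be turned on without
  # it if certificates come from elsewhere (e.g. Vault or another
  # SPIFFE implementation), which is why the flags stay independent.
  securityEnabled: true
  # Default mTLS policy between services (proposed rename: mtlsEnabled).
  mtlsDefault: false
```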
    # create RBAC resources. Must be set for any cluster configured with rbac.
    rbacEnabled: true
    # mutual TLS authentication
For rbacEnabled... I'd assume it is only valid when security is enabled? And mixer is enabled today...

AFAIK RBAC is required if the cluster has RBAC (and many have it on by default). rbacEnabled only registers the cluster permissions so pilot can read stuff.
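The flag would typically gate the RBAC manifests in the templates; a hedged sketch of the shape, with an illustrative resource name and rule set rather than the chart's actual permissions:

```yaml
{{- if .Values.global.rbacEnabled }}
# Grants pilot read access to resources it watches (illustrative).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: istio-pilot-reader
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
{{- end }}
```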
linsun left a comment:

Pending address of minor comments. Really good work Costin, I appreciate your effort - now we can generate istio yamls from helm artifacts. :)
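Generating a static manifest from the chart uses helm's client-side rendering (no tiller); the chart path, release name, and namespace below are illustrative:

```
# Render the chart to a plain istio.yaml and apply it.
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system > istio.yaml
kubectl apply -f istio.yaml
```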
@costinm PR needs rebase
PTAL - I addressed most of the comments, but also added a prometheus change (to sync with the templates/addons config), and added the refresh setting for larger clusters.
Please be careful not to commit accidental/unwanted changes to vendor; please undo that change for the build to pass.
    if [ ! -f /usr/local/bin/helm -a ! -f ${ISTIO_OUT}/helm ] ; then
        # Install helm. Please keep it in sync with .circleci
        cd /tmp && \
        curl -Lo /tmp/helm.tgz https://storage.googleapis.com/kubernetes-helm/helm-${HELM_VER}-linux-amd64.tar.gz && \
Can we make this optional? Not everyone should have to install helm (only if you touch the templates, ...). Not every build (lint, unit test, etc...) needs helm either.

We plan to use helm to generate istio.yaml and the test configs - so it won't be optional. Also helm (full helm, with tiller) will need to be part of one of the integration tests.

This is a bit risky though, because helm can easily become required accidentally that way. It's also not hermetic; we should avoid downloading stuff. Maybe the helm client can be vendored? We should already have most of it, as we use the libs in gen-deploy. In short, I'm concerned it's really easy after that to break the "istio-auth.yaml is all you need to install a basic istio (demo) setup" property - and still pass all the tests.

It still shouldn't be required.
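One way to keep the download optional, sketched under the assumption of an opt-in environment variable; USE_HELM is an invented name, and the HELM_VER default is illustrative, not the build's pinned version:

```shell
#!/bin/sh
# Hypothetical sketch: gate the helm install behind an opt-in flag so
# lint/unit-test builds never download anything.
HELM_VER=${HELM_VER:-v2.7.2}          # illustrative default
ISTIO_OUT=${ISTIO_OUT:-/tmp/istio-out}

need_helm() {
  # Only targets that actually render templates should set USE_HELM=1.
  [ "${USE_HELM:-0}" = "1" ]
}

install_helm() {
  # Reuse an existing binary instead of downloading (more hermetic).
  if command -v helm >/dev/null 2>&1 || [ -f "${ISTIO_OUT}/helm" ]; then
    echo "helm already present"
    return 0
  fi
  echo "would fetch helm ${HELM_VER} here"
}

if need_helm; then
  install_helm
else
  echo "skipping helm install"
fi
```

Vendoring the helm client, as suggested above, would remove the download entirely and keep the build hermetic.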
/test istio-pilot-e2e |
Changes:
- ... is more natural), and the UX is pretty bad (the user having to type the tag 7 times).
- I did more manual diffs - they're very time consuming. Fixed a few missing things (ingress certs), but we'll need more people and eyes to get them in sync and make sure nothing gets lost.