Let front-proxy verify against non-fp client CA (#178)
Commits:
- Signed-off-by: Nelo-T. Wallus <red.brush9525@fastmail.com>; Signed-off-by: Nelo-T. Wallus <n.wallus@sap.com>
- …undle — Signed-off-by: Nelo-T. Wallus <red.brush9525@fastmail.com>; Signed-off-by: Nelo-T. Wallus <n.wallus@sap.com>
Pull request overview
This PR creates a merged client CA secret that concatenates the FrontProxyClientCA and the shard ClientCA, and mounts it for both internal and external front proxies. This allows the front proxy to accept client certificates signed by either CA, enabling kubeconfigs created against the root shard to also authenticate through the front proxy.
Changes:
- New `mergedClientCASecretReconciler` that fetches both CA secrets, concatenates their `tls.crt` data, and stores the result in a new merged secret
- Unified deployment volume mount and `--client-ca-file` argument for both internal (rootshard proxy) and external front proxy modes
- Updated tests with the required CA secret fixtures
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| internal/resources/frontproxy/ca_bundle.go | New merged client CA secret reconciler and name helper |
| internal/resources/frontproxy/deployment.go | Unified client CA mount path and CLI arg for both proxy modes |
| internal/resources/frontproxy/reconciler.go | Register the new merged client CA reconciler |
| internal/resources/resources.go | New GetMergedClientCAName helper |
| internal/resources/frontproxy/deployment_test.go | Updated expected volume mount name |
| internal/controller/frontproxy/controller_test.go | Added CA secret fixtures for fake client |
| internal/controller/rootshard/controller_test.go | Added CA secret fixtures for fake client |
```go
	return fmt.Sprintf("%s-merged-client-ca", r.frontProxy.Name)
}
return fmt.Sprintf("%s-proxy-merged-client-ca", r.rootShard.Name)
```
The mergedClientCASecretName() method duplicates the naming logic from resources.GetMergedClientCAName() instead of calling it. For the r.frontProxy != nil branch they happen to produce the same result, but for the rootShard proxy branch (r.frontProxy == nil), mergedClientCASecretName() returns "{rootShard.Name}-proxy-merged-client-ca" which is inconsistent with GetMergedClientCAName(). This means GetMergedClientCAName cannot be used for the rootShard proxy case, and the two functions could silently diverge.
Consider having mergedClientCASecretName() call resources.GetMergedClientCAName() with the appropriate owner name, or remove GetMergedClientCAName and keep the logic in one place.
Suggested change:

```diff
-	return fmt.Sprintf("%s-merged-client-ca", r.frontProxy.Name)
-}
-return fmt.Sprintf("%s-proxy-merged-client-ca", r.rootShard.Name)
+	return resources.GetMergedClientCAName(r.frontProxy.Name)
+}
+// For the rootShard proxy case, the owner name includes the "-proxy" suffix
+// so that the resulting secret name remains "<rootShard.Name>-proxy-merged-client-ca".
+return resources.GetMergedClientCAName(fmt.Sprintf("%s-proxy", r.rootShard.Name))
```
True but I think fine for now. We'll delete most of this anyhow when fixing the CA layout.
🐷 Certificate Trust Analysis Summary

**What This PR Does:** Creates a merged secret concatenating the front-proxy client CA and the shard client CA.

**Does It Solve the Problem?** Yes, for the immediate pain point: kubeconfigs signed by the shard's client CA can now authenticate through the front proxy.

**Kubeconfig Resource Interaction:** PR #178 enables the case where someone takes a shard-targeting kubeconfig and wants to reuse it against the front-proxy (auth now works, but the URL still needs a manual swap).

**Risks to Watch**

**Alternative Considered: Both CAs in Both Directions.** Also discussed: mounting both CAs into shards too, so shards trust the front-proxy client CA as well.

**Recommendation:** PR #178 is a sound minimal bandaid. The "both directions" approach is cleaner for UX but crosses the architectural boundary, which may be acceptable since issue #164 proposes a single shared client CA long-term anyway. Both are bandaids until that redesign.

*Analysis by 🐷 via Slack thread discussion with @mjudeikis and @ntnn*
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED.

This pull request has been approved by: mjudeikis. The full list of commands accepted by this bot can be found here; the pull request process is described here.

Details: needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
LGTM label has been added.

Details: Git tree hash: bff0f1e6a58ce705ce7e5321ff5c1881364074a4
Summary
This PR adds another reconciler that creates a secret merging the front-proxy client CA and the shard client CA.
This is then mounted for both the internal and external front proxies.
The internal front proxy doesn't need it; I just saw no reason to make a distinction there.
As noted in #164 the CA structure is a problem.
We are currently struggling with a set of issues that almost resembles a Gordian knot: we cannot change architecture that must be changed, because permissions break down; we cannot change permissions, because what would enable changing permissions only exists on paper; and any workaround is either very hacky, or a lot of extra work with little to no gain, and/or brittle at best.
This change simply allows the front proxy to also validate client certificates against the shard client CA. That means kubeconfigs created against the root shard can also talk to the front proxy.
It is a minimal bandaid for the problem until the CA layout can be changed later down the line.
I did not consider making shards accept the front proxy CA - the shards have no inherent knowledge of the front proxy and it should stay that way.
What Type of PR Is This?
/kind bug
Related Issue(s)
Fixes #
Release Notes