feat(syncer): add resource syncer #6595
Conversation
Signed-off-by: liubo02 <liubo02@pingcap.com>
```go
"sigs.k8s.io/controller-runtime/pkg/cache"
"sigs.k8s.io/controller-runtime/pkg/healthz"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
```
cmd/resource-syncer/main.go
Outdated
```go
if err := run(&cfg); err != nil {
	setupLog.Error(err, "unable to new manager")
	os.Exit(1)
```
Pull request overview
This PR introduces a new resource syncer component designed to sync Kubernetes secrets into pod filesystems. The syncer runs as an init container that watches secrets with specific labels and synchronizes their data to a mounted volume.
Key Changes:
- Adds a new `resource-syncer` binary with a controller that watches and syncs secrets to the filesystem
- Implements health and readiness checks that report ready once initial sync completes
- Updates build system (Makefile, Docker, scripts) to include the new resource-syncer component
Reviewed changes
Copilot reviewed 9 out of 10 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| pkg/controllers/resourcesyncer/secret/controller.go | Core reconciler that watches secrets and syncs their data to filesystem directories |
| pkg/controllers/resourcesyncer/secret/controller_test.go | Unit tests for the secret syncer controller |
| cmd/resource-syncer/main.go | Main entry point with CLI flags and manager setup |
| cmd/resource-syncer/README.md | Documentation for the resource syncer component |
| image/Dockerfile | Adds Docker image build stage for resource-syncer |
| hack/lib/build.sh | Adds resource-syncer to build targets |
| hack/lib/image.sh | Configures image prefix for resource-syncer |
| Makefile | Adds resource-syncer to the list of built commands |
| go.mod | Adds afero filesystem abstraction dependency |
| go.sum | Checksums for afero dependency |
```go
if fileData.isDir {
	err := f.MkdirAll(path, 0o755)
	assert.NoError(t, err)
}

dir := filepath.Dir(path)
err := f.MkdirAll(dir, 0o755)
assert.NoError(t, err)
err = afero.WriteFile(f, path, fileData.data, 0o644)
assert.NoError(t, err)
```
The setupFs function has a logic error. For directory entries (when fileData.isDir is true), it creates the directory but then continues to write it as a file on line 324. This will fail because you cannot write file data to a directory path. Either skip the file write for directories with a continue statement after line 318, or restructure the logic to handle directories separately.
```go
c := &cases[i]
t.Run(c.desc, func(tt *testing.T) {
	fc := client.NewFakeClient(c.objs...)
	afero.NewMemMapFs()
```

Unused variable: `afero.NewMemMapFs()` is called but the result is not used. This line should be removed, as `setupFs` creates and returns the filesystem.

Suggested change:

```diff
- afero.NewMemMapFs()
```
```go
r := Reconciler{
	Client:    fc,
	Labels:    c.labels,
	BaseDirFS: dirFs,
```

The `Reconciler` is missing initialization of the `cache` field. In the test setup, `cache` is not initialized, which will cause a nil map panic when the reconciler tries to access `r.cache[s.Name]` on line 180 of the controller. Add `cache: map[string]string{}` to the `Reconciler` initialization.

Suggested change:

```diff
  BaseDirFS: dirFs,
+ cache:     map[string]string{},
```
```diff
@@ -0,0 +1,3 @@
+# Resource Syncer
+
+This binary is designed for syncing resources(secrets, configmap, etc...) of kubernetes into pods
```

The README description mentions "configmap" but the implementation only handles secrets. Either update the description to be accurate ("This binary is designed for syncing secrets into pods") or add a note that configmap support is planned for the future.

Suggested change:

```diff
- This binary is designed for syncing resources(secrets, configmap, etc...) of kubernetes into pods
+ This binary is designed for syncing secrets of Kubernetes into pods.
+
+ > Note: Support for syncing configmaps is planned for a future release.
```
```go
client.InNamespace(r.Namespace),
client.MatchingLabels(r.Labels),
); err != nil {
	logger.Error(err, "cannot list secrets", "namepsace", r.Namespace, "labels", r.Labels)
```

Typo in log message: "namepsace" should be "namespace".

Suggested change:

```diff
- logger.Error(err, "cannot list secrets", "namepsace", r.Namespace, "labels", r.Labels)
+ logger.Error(err, "cannot list secrets", "namespace", r.Namespace, "labels", r.Labels)
```
```go
failed := true
for range maxRetryTimes {
	// has synced, no need to list secrets again
	if r.hasSynced.Load() {
		return
	}

	nctx, cancel := context.WithTimeout(ctx, defaultTimeout)
	defer cancel()

	if err := r.Client.List(
		nctx,
		&sl,
		client.InNamespace(r.Namespace),
		client.MatchingLabels(r.Labels),
	); err != nil {
		logger.Error(err, "cannot list secrets", "namepsace", r.Namespace, "labels", r.Labels)
	} else {
		failed = false
		break
	}
}

if !failed && len(sl.Items) == 0 {
	if r.hasSynced.CompareAndSwap(false, true) {
		logger.Info("no secrets, hasSynced set to true")
	}

	return
}
```
The variable failed is set but the retry logic doesn't use it to handle the failure case. If all retries fail, the goroutine continues to check len(sl.Items) == 0 with potentially stale data from the last failed attempt. Consider logging an error or handling the case when all retries are exhausted.
```go
		errs = append(errs, err)
	}
	files[name] = struct{}{}
}
```

The `resourceVersionFile` constant should be added to the keep map to prevent it from being cleaned up as an unknown file. Currently, when cleaning up the secret directory, the resourceVersion file is not in the `files` map (only actual secret data keys are), so it could be incorrectly removed.

Suggested change:

```diff
  	}
  }
+ // Ensure resourceVersionFile is not deleted during cleanup
+ files[resourceVersionFile] = struct{}{}
```
cmd/resource-syncer/main.go
Outdated
```go
setupLog.Info("current config", "config", &cfg)

if err := run(&cfg); err != nil {
	setupLog.Error(err, "unable to new manager")
```

The error message is misleading. It says "unable to new manager" but the `run` function can fail for multiple reasons (manager creation, directory setup, controller setup, health checks, or starting the manager). Use a more generic message like "unable to run resource syncer" or pass through the actual error from `run()`.

Suggested change:

```diff
- setupLog.Error(err, "unable to new manager")
+ setupLog.Error(err, "unable to run resource syncer")
```
```go
Labels    map[string]string
BaseDirFS afero.Fs

// secret name to it's resourceVersion
```

Typo in comment: "it's" should be "its" (the possessive form doesn't use an apostrophe).

Suggested change:

```diff
- // secret name to it's resourceVersion
+ // secret name to its resourceVersion
```
```go
fake.ResourceVersion[corev1.Secret]("test"),
fake.Label[corev1.Secret]("test", "test"),
// change data to test whether update is actually skipped
// not really happended in real world
```

Typo in comment: "happended" should be "happened".

Suggested change:

```diff
- // not really happended in real world
+ // not really happened in real world
```
Codecov Report

❌ Patch coverage is …

```
@@           Coverage Diff            @@
##             main    #6595    +/-  ##
==========================================
+ Coverage   41.38%   41.47%   +0.08%
==========================================
  Files         348      349       +1
  Lines       19885    20043     +158
==========================================
+ Hits         8230     8313      +83
- Misses      11655    11730      +75
```
Signed-off-by: liubo02 <liubo02@pingcap.com>

/test pull-e2e

1 similar comment

/test pull-e2e
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: csuzhangxc

[LGTM Timeline notifier]

@liubog2008: The following test failed, say …
The resource syncer is designed to sync secrets or configmaps of Kubernetes into pods.
Usage Example
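Since the component is described as running as an init container against a mounted volume, a usage example would look roughly like the pod spec fragment below. This is a hypothetical sketch: the image name, flag names, and paths are illustrative assumptions, not the binary's actual CLI.

```yaml
# Hypothetical pod spec fragment; image, flags, and paths are
# illustrative only and not taken from the PR.
initContainers:
- name: resource-syncer
  image: pingcap/resource-syncer:latest   # placeholder image name
  args:
  - --namespace=$(POD_NAMESPACE)
  - --label-selector=app=example          # watch secrets with this label
  - --base-dir=/var/run/secrets-sync      # sync target inside the pod
  volumeMounts:
  - name: synced-secrets
    mountPath: /var/run/secrets-sync
```

The main container would mount the same `synced-secrets` volume to read the synced files once the init container reports ready.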