
Conversation

@lllamnyp
Member

@lllamnyp lllamnyp commented Dec 4, 2025

What this PR does

This patch adds compilation and Docker build steps for the backup controller, as well as a Helm chart to deploy it as part of the PaaS bundles.

Release note

[backups] Build and deploy backup controller

Summary by CodeRabbit

  • New Features

    • Introduced backup controller component for managing backup plans and jobs with automatic cron expression validation and health monitoring.
  • Chores

    • Added Helm chart and deployment configuration for the backup controller in both full and hosted platform bundles.


@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Dec 4, 2025
@coderabbitai
Contributor

coderabbitai bot commented Dec 4, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

A new backup-controller component is introduced with a complete Kubebuilder-based Go controller, Helm chart configuration, Docker image definition, platform bundle integration, and minimal logic updates to handle cron expression validation state.

Changes

  • Build Configuration (Makefile): added a build step to compile the backup-controller image from packages/system/backup-controller.
  • Controller Implementation (cmd/backup-controller/main.go, internal/backupcontroller/plan_controller.go): wired a Kubebuilder/controller-runtime manager with API schemes, metrics, webhooks, TLS, and leader election; updated the plan reconciler to clear PlanConditionError when cron parsing succeeds.
  • Helm Chart Structure (packages/system/backup-controller/Chart.yaml, Makefile, images/backup-controller/Dockerfile, values.yaml): created the Helm chart descriptor (apiVersion v2), a build Makefile for image tagging and values updates, a multi-stage Dockerfile using golang:1.24-alpine, and default configuration values for replicas, debug mode, metrics, and resource requests/limits.
  • Helm Templates (packages/system/backup-controller/templates/{crds,deployment,rbac,rbac-bind,sa}.yaml): added Kubernetes manifests for CRD aggregation, a Deployment with metrics, health probes, and tolerations, a ClusterRole with backups.cozystack.io permissions, a ClusterRoleBinding, and a ServiceAccount.
  • Platform Bundle Integration (packages/core/platform/bundles/paas-full.yaml, paas-hosted.yaml): registered the backup-controller release (cozy-backup-controller chart, cozy-backup-controller namespace) with dependencies on cilium and kubeovn.
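The condition-clearing behavior noted for plan_controller.go can be illustrated with a small self-contained sketch. This is not the repository's code: the Condition struct, the string value of PlanConditionError, validateCron, and reconcileSchedule are all stand-ins (the real controller presumably uses metav1.Condition and the apimachinery meta helpers), but the state transition is the one described in the summary: record an error condition when cron parsing fails, and remove the stale condition once parsing succeeds.

```go
package main

import (
	"fmt"
	"strings"
)

// Condition is a stand-in for metav1.Condition.
type Condition struct {
	Type   string
	Status string // "True" or "False"
	Reason string
}

// PlanConditionError's actual value is an assumption for this sketch.
const PlanConditionError = "Error"

// validateCron is a hypothetical stand-in for a real cron parser
// (e.g. robfig/cron); here it only checks the field count.
func validateCron(expr string) error {
	if n := len(strings.Fields(expr)); n != 5 {
		return fmt.Errorf("expected 5 cron fields, got %d", n)
	}
	return nil
}

// upsert replaces an existing condition of the same type or appends.
func upsert(conds []Condition, c Condition) []Condition {
	for i := range conds {
		if conds[i].Type == c.Type {
			conds[i] = c
			return conds
		}
	}
	return append(conds, c)
}

// reconcileSchedule mirrors the described logic: set the error
// condition on parse failure, clear any stale one on success.
func reconcileSchedule(conds []Condition, expr string) []Condition {
	if err := validateCron(expr); err != nil {
		return upsert(conds, Condition{PlanConditionError, "True", err.Error()})
	}
	// Parsing succeeded: drop a stale error condition if present.
	out := conds[:0]
	for _, c := range conds {
		if c.Type != PlanConditionError {
			out = append(out, c)
		}
	}
	return out
}

func main() {
	conds := reconcileSchedule(nil, "bad cron")
	fmt.Println(len(conds), conds[0].Type) // 1 Error
	conds = reconcileSchedule(conds, "*/5 * * * *")
	fmt.Println(len(conds)) // 0
}
```

Without the clearing step, a Plan whose schedule was fixed by the user would keep reporting the old parse error indefinitely, which is why the reviewers call this out as the key behavioral change.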

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20–25 minutes

  • Areas requiring attention:
    • Verify controller reconciliation logic in plan_controller.go correctly handles condition state transitions and API updates
    • Confirm Helm template variable references (values.backupController.*) are correctly rendered across all manifest files
    • Validate RBAC permissions align with planned backup controller operations and follow least-privilege principle
    • Ensure Dockerfile and build configuration produce correctly tagged/referenced images in values injection

Poem

🐰 A backup controller hops into place,
With Helm charts and RBAC to set the pace,
Through Kubebuilder's scaffold it takes its stand,
Managing plans across the Kubernetes land!
Cron expressions now validated with care,
Error conditions cleared—backups everywhere! 🎉

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): coverage is 0.00%, below the required 80.00% threshold. You can run @coderabbitai generate docstrings to improve it.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title '[backups] Build and deploy backup controller' directly and clearly summarizes the main change: adding build steps and Helm deployment for the backup controller.

@gemini-code-assist
Contributor

Summary of Changes

Hello @lllamnyp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request establishes the foundational infrastructure for a new backup controller within the system. It enables the building and containerization of the controller's Go application and integrates its deployment into the existing PaaS bundles via a new Helm chart. This ensures the backup controller can be consistently deployed and managed across different PaaS environments.

Highlights

  • Backup Controller Integration: New compilation and Docker build steps have been added for the backup-controller.
  • Helm Chart Deployment: A Helm chart is introduced to deploy the backup-controller as part of the PaaS bundles (paas-full.yaml and paas-hosted.yaml).
  • Kubernetes Controller Setup: The core Go application for the backup-controller is added, including its main logic, scheme registration, and manager configuration for reconciliation.

@dosubot dosubot bot added the enhancement New feature or request label Dec 4, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a new backup controller, including its build process and deployment via a Helm chart. The changes are a good step forward, but there are a few critical issues in the Kubernetes manifests that will prevent the controller from working correctly. Specifically, there's an incorrect dependency in one of the bundles and a misconfigured RBAC role binding. Additionally, the deployment manifest is missing crucial configurations like health probes and resource limits, and the RBAC permissions are overly permissive. I've provided detailed suggestions to fix these issues to ensure the controller is robust and secure.

releaseName: backup-controller
chart: cozy-backup-controller
namespace: cozy-backup-controller
dependsOn: [cilium,kubeovn]
Contributor

critical

The dependsOn field for the backup-controller release lists cilium and kubeovn. However, these releases are not part of the paas-hosted.yaml bundle, which will cause the deployment to fail. The dependency should be removed or changed to a release that exists in this bundle.

  dependsOn: []

roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: backup-controller
Contributor

critical

The roleRef.name in this ClusterRoleBinding is backup-controller, but the ClusterRole defined in rbac.yaml is named backups.cozystack.io:core-controller. This mismatch will prevent the controller's service account from being granted any permissions, causing the controller to fail. The roleRef.name must match the metadata.name of the ClusterRole.

  name: backups.cozystack.io:core-controller

Comment on lines 25 to 37
containers:
- name: backup-controller
image: "{{ .Values.backupController.image }}"
args:
- --leader-elect
{{- if .Values.backupController.debug }}
- --zap-log-level=debug
{{- else }}
- --zap-log-level=info
{{- end }}
Contributor

high

The deployment configuration for the backup controller is missing several important settings for production-readiness:

  1. Health Probes: livenessProbe and readinessProbe are not defined. These are essential for Kubernetes to manage the pod's lifecycle correctly. The controller exposes /healthz and /readyz endpoints on port 8081 for this purpose.
  2. Resource Management: resources.requests and resources.limits for CPU and memory are missing. Setting these helps ensure predictable performance and scheduling.
  3. Metrics: Controller metrics are currently disabled by default. They should be enabled by passing the --metrics-bind-address flag to expose the metrics endpoint, which is crucial for monitoring.

I've suggested adding these configurations.

      containers:
      - name: backup-controller
        image: "{{ .Values.backupController.image }}"
        args:
        - --leader-elect
        - --metrics-bind-address=:8443
        {{- if .Values.backupController.debug }}
        - --zap-log-level=debug
        {{- else }}
        - --zap-log-level=info
        {{- end }}
        ports:
        - name: metrics
          containerPort: 8443
        - name: health
          containerPort: 8081
        readinessProbe:
          httpGet:
            path: /readyz
            port: health
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: health
          initialDelaySeconds: 15
          periodSeconds: 20
        resources:
          requests:
            cpu: 10m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 128Mi

Comment on lines +116 to +118
// TODO(user): If CertDir, CertName, and KeyName are not specified, controller-runtime will automatically
// generate self-signed certificates for the metrics server. While convenient for development and testing,
// this setup is not recommended for production.
Contributor

medium

This TODO comment highlights a potential issue for production environments regarding self-signed certificates for the metrics server. For production readiness, this should be addressed by providing a mechanism to supply proper certificates. Please consider creating a follow-up task to handle certificate management for the metrics endpoint.

Comment on lines 5 to 12
rules:
- apiGroups: ["backups.cozystack.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
- apiGroups: ["backups.cozystack.io"]
resources:
- backupjobs
verbs: ["*"]
Contributor

medium

The permissions defined in this ClusterRole are overly permissive. It's a security best practice to follow the principle of least privilege by granting only the permissions that are strictly necessary.

  • The first rule grants get, list, watch on all (*) resources within the backups.cozystack.io API group. This should be scoped down to just the plans resource.
  • The second rule grants all (*) verbs on backupjobs. Based on the controller's logic, it only needs the create verb.

I recommend tightening these permissions as suggested.

rules:
- apiGroups: ["backups.cozystack.io"]
  resources: ["plans"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["backups.cozystack.io"]
  resources: ["backupjobs"]
  verbs: ["create"]

@lllamnyp lllamnyp force-pushed the feat/backup-controller branch from cb24e7f to 47c4655 Compare December 4, 2025 16:11
@lllamnyp lllamnyp force-pushed the feat/backup-controller-pt2 branch from ff15585 to 9a0e632 Compare December 4, 2025 16:12
@lllamnyp lllamnyp force-pushed the feat/backup-controller branch from 47c4655 to a7b4239 Compare December 5, 2025 08:58
@lllamnyp lllamnyp force-pushed the feat/backup-controller-pt2 branch 2 times, most recently from 5c9f64d to fc9f355 Compare December 5, 2025 09:47
Base automatically changed from feat/backup-controller to main December 5, 2025 20:15
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Dec 6, 2025
## What this PR does

This patch adds compilation and docker build steps for the backup
controller as well as adding a Helm chart to deploy it as part of the
PaaS bundles.

### Release note

```release-note
[backups] Build and deploy backup controller
```

Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
@lllamnyp lllamnyp force-pushed the feat/backup-controller-pt2 branch from fc9f355 to 8928552 Compare December 7, 2025 11:36
@lllamnyp lllamnyp enabled auto-merge December 7, 2025 11:36
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
cmd/backup-controller/main.go (1)

109-119: Production certificate management remains unaddressed.

The TODO comment about self-signed certificates for the metrics server was already flagged in previous reviews. This should be tracked for production readiness.

🧹 Nitpick comments (1)
packages/system/backup-controller/Makefile (1)

7-18: Declare targets as .PHONY.

The image and image-backup-controller targets should be declared as .PHONY since they don't produce files with those names.

Apply this diff:

 include ../../../scripts/common-envs.mk
 include ../../../scripts/package.mk

+.PHONY: image image-backup-controller
+
 image: image-backup-controller
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 67ecf3d and 8928552.

📒 Files selected for processing (14)
  • Makefile (1 hunks)
  • cmd/backup-controller/main.go (1 hunks)
  • internal/backupcontroller/plan_controller.go (1 hunks)
  • packages/core/platform/bundles/paas-full.yaml (1 hunks)
  • packages/core/platform/bundles/paas-hosted.yaml (1 hunks)
  • packages/system/backup-controller/Chart.yaml (1 hunks)
  • packages/system/backup-controller/Makefile (1 hunks)
  • packages/system/backup-controller/images/backup-controller/Dockerfile (1 hunks)
  • packages/system/backup-controller/templates/crds.yaml (1 hunks)
  • packages/system/backup-controller/templates/deployment.yaml (1 hunks)
  • packages/system/backup-controller/templates/rbac-bind.yaml (1 hunks)
  • packages/system/backup-controller/templates/rbac.yaml (1 hunks)
  • packages/system/backup-controller/templates/sa.yaml (1 hunks)
  • packages/system/backup-controller/values.yaml (1 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.go

📄 CodeRabbit inference engine (AGENTS.md)

Use Controller-runtime patterns and kubebuilder style for Go code

Files:

  • internal/backupcontroller/plan_controller.go
  • cmd/backup-controller/main.go
**/Chart.yaml

📄 CodeRabbit inference engine (AGENTS.md)

Use Helm Charts with the umbrella pattern and vendor upstream charts in charts/ directory

Files:

  • packages/system/backup-controller/Chart.yaml
🧠 Learnings (4)
📓 Common learnings
Learnt from: CR
Repo: cozystack/cozystack PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T11:21:45.808Z
Learning: Applies to **/*.go : Use Controller-runtime patterns and kubebuilder style for Go code
📚 Learning: 2025-07-10T12:14:27.197Z
Learnt from: lllamnyp
Repo: cozystack/cozystack PR: 1161
File: packages/apps/virtual-machine/templates/dashboard-resourcemap.yaml:6-12
Timestamp: 2025-07-10T12:14:27.197Z
Learning: Kubernetes RBAC rules with resourceNames work correctly for list/watch verbs. When resourceNames is specified in an RBAC rule, it properly restricts access to only those named resources, even for list and watch operations. Examples: `kubectl get resource resourcename -w` watches for changes on a single resource, and `kubectl get resource --field-selector .metadata.name=resourcename` lists a specific resource. The Kubernetes API server correctly distinguishes such requests from their less specific counterparts.

Applied to files:

  • packages/system/backup-controller/templates/rbac.yaml
📚 Learning: 2025-11-27T11:21:45.808Z
Learnt from: CR
Repo: cozystack/cozystack PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T11:21:45.808Z
Learning: Applies to **/Chart.yaml : Use Helm Charts with the umbrella pattern and vendor upstream charts in `charts/` directory

Applied to files:

  • packages/system/backup-controller/templates/crds.yaml
  • packages/core/platform/bundles/paas-hosted.yaml
  • packages/system/backup-controller/Chart.yaml
  • packages/core/platform/bundles/paas-full.yaml
📚 Learning: 2025-11-27T11:21:45.808Z
Learnt from: CR
Repo: cozystack/cozystack PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-27T11:21:45.808Z
Learning: Applies to **/*.go : Use Controller-runtime patterns and kubebuilder style for Go code

Applied to files:

  • cmd/backup-controller/main.go
🧬 Code graph analysis (2)
internal/backupcontroller/plan_controller.go (1)
api/backups/v1alpha1/plan_types.go (1)
  • PlanConditionError (22-22)
cmd/backup-controller/main.go (4)
internal/sse/server.go (1)
  • Options (17-44)
internal/controller/dashboard/manager.go (1)
  • NewManager (51-54)
pkg/apiserver/apiserver.go (1)
  • Scheme (53-53)
internal/backupcontroller/plan_controller.go (1)
  • PlanReconciler (28-31)
🪛 checkmake (0.2.2)
packages/system/backup-controller/Makefile

[warning] 1-1: Missing required phony target "all"

(minphony)


[warning] 1-1: Missing required phony target "clean"

(minphony)


[warning] 1-1: Missing required phony target "test"

(minphony)


[warning] 7-7: Target "image" should be declared PHONY.

(phonydeclared)

🪛 YAMLlint (1.37.1)
packages/system/backup-controller/templates/crds.yaml

[error] 1-1: syntax error: expected the node content, but found '-'

(syntax)

packages/system/backup-controller/templates/deployment.yaml

[error] 31-31: syntax error: could not find expected ':'

(syntax)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build
🔇 Additional comments (17)
Makefile (1)

18-18: LGTM!

The backup-controller image build step is properly integrated into the build sequence and follows the established pattern.

internal/backupcontroller/plan_controller.go (1)

65-76: LGTM!

The error condition reset logic correctly clears stale error states when cron parsing succeeds, following controller-runtime best practices. The implementation properly checks for existing conditions and handles status updates with appropriate error handling.

Based on learnings, this follows Controller-runtime patterns and kubebuilder style appropriately.

packages/system/backup-controller/templates/sa.yaml (1)

1-4: LGTM!

The ServiceAccount definition is correct and appropriately minimal for a Helm chart template.

packages/system/backup-controller/Chart.yaml (1)

1-3: LGTM!

The Chart descriptor is appropriately structured for a custom controller. The umbrella pattern guidance applies to charts that vendor upstream components, whereas this is a custom backup-controller implementation.

As per coding guidelines, this follows the Helm chart structure appropriately for a custom controller.

packages/system/backup-controller/images/backup-controller/Dockerfile (2)

3-23: LGTM!

The multi-stage build follows best practices for containerizing Go applications:

  • Cross-platform build support with TARGETOS/TARGETARCH
  • Static linking with CGO disabled for scratch base
  • Minimal final image with only binary and CA certificates

1-1: Go 1.24 is available and valid. golang:1.24-alpine is a legitimate Docker image tag, and Go 1.24 was released before December 2025, so the build will not fail due to version unavailability. Consider updating to Go 1.25.5 (the current stable release as of December 2025) for access to the latest features and security patches.

Likely an incorrect or invalid review comment.

packages/system/backup-controller/templates/crds.yaml (1)

1-4: LGTM! Static analysis error is a false positive.

The Helm template correctly aggregates CRD definitions from the definitions/* directory. The YAMLlint syntax error is expected since static analysis tools don't understand Go template syntax.

packages/system/backup-controller/templates/rbac-bind.yaml (1)

1-12: ClusterRoleBinding is correctly configured with valid reference.

The ClusterRoleBinding properly references the ClusterRole backups.cozystack.io:core-controller which exists in rbac.yaml with matching metadata. The binding subject correctly points to the backup-controller ServiceAccount, and the namespace uses the appropriate template variable.

packages/system/backup-controller/templates/rbac.yaml (1)

5-11: LGTM! ClusterRole permissions are properly scoped.

The RBAC permissions are well-defined and follow the principle of least privilege. The controller has read access to Plans and appropriate access to manage BackupJobs.

packages/system/backup-controller/values.yaml (1)

1-14: LGTM! Default values are well-configured.

The default configuration is production-ready with:

  • High availability (2 replicas)
  • Metrics enabled for observability
  • Conservative resource limits
  • Secure metrics endpoint (HTTPS on :8443)
cmd/backup-controller/main.go (4)

48-53: LGTM! Scheme initialization follows kubebuilder patterns.

The scheme registration correctly includes both the core Kubernetes types and the custom backups v1alpha1 API types, following controller-runtime best practices.

As per coding guidelines, this follows the expected controller-runtime patterns and kubebuilder style.


80-93: LGTM! HTTP/2 hardening improves security posture.

The conditional disabling of HTTP/2 by default protects against known CVEs (GHSA-qppj-fm5r-hxr3, GHSA-4374-p667-p6c8). The implementation correctly applies TLS options to both webhook and metrics servers.


150-156: LGTM! PlanReconciler setup follows controller-runtime patterns.

The reconciler is correctly initialized with the manager's client and scheme, following standard kubebuilder conventions.

As per coding guidelines, this follows the expected controller-runtime patterns and kubebuilder style.


121-124: Verify the need for increased rate limits.

The Kubernetes client QPS and Burst have been increased significantly (QPS: 5.0 → 50.0, Burst: 10 → 100). This pattern is applied consistently across three controllers (backup-controller, lineage-controller-webhook, and cozystack-controller), yet the PlanReconciler's workload is cron-based with 5-minute minimum intervals, not high-frequency operations. No documentation or load testing results justify these aggressive limits.

Please confirm:

  1. What is the expected reconciliation workload that requires these high limits?
  2. Have you load-tested the controller to verify these values are appropriate?
  3. Could these limits impact API server performance under load?
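For context on what the two knobs under question mean: client-go throttles outgoing API requests with a token-bucket rate limiter, where Burst is the bucket capacity (requests that may fire back-to-back) and QPS is the steady refill rate. A toy stdlib model of those semantics, using the values quoted in this comment (the real limiter lives in k8s.io/client-go; this sketch replaces wall-clock time with an explicit advance step):

```go
package main

import "fmt"

// bucket is a toy token bucket: Burst tokens are available
// immediately, and tokens refill at QPS per second.
type bucket struct {
	qps    float64
	burst  float64
	tokens float64
}

func newBucket(qps float64, burst int) *bucket {
	return &bucket{qps: qps, burst: float64(burst), tokens: float64(burst)}
}

// allow consumes one token if available.
func (b *bucket) allow() bool {
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

// advance simulates elapsed time, refilling tokens at QPS
// and capping at the burst size.
func (b *bucket) advance(seconds float64) {
	b.tokens += b.qps * seconds
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
}

func main() {
	// The values under review: QPS 50, Burst 100.
	b := newBucket(50.0, 100)
	fired := 0
	for b.allow() {
		fired++
	}
	fmt.Println(fired) // 100: the full burst drains immediately
	b.advance(1.0)
	fired = 0
	for b.allow() {
		fired++
	}
	fmt.Println(fired) // 50: one second of refill at QPS
}
```

The reviewer's point follows from this model: a cron-driven reconciler with five-minute minimum intervals should rarely approach even the default bucket, so raising both values tenfold mainly increases the controller's ceiling for hammering the API server.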
packages/core/platform/bundles/paas-full.yaml (1)

115-120: LGTM! Backup controller integration follows bundle conventions.

The backup-controller release is properly configured with:

  • Correct chart reference (cozy-backup-controller)
  • Appropriate dependencies on core networking (cilium, kubeovn)
  • Dedicated namespace (cozy-backup-controller)
  • Standard naming conventions
packages/system/backup-controller/templates/deployment.yaml (2)

25-57: LGTM! Deployment is production-ready with comprehensive configuration.

The deployment now includes all essential production settings that were previously flagged:

  • Health probes for both liveness and readiness
  • Conditional metrics endpoint configuration
  • Resource requests and limits
  • Debug logging control

38-40: Potential issue with port extraction from bindAddress.

Line 40 uses mustLast to extract the port from bindAddress, which assumes the value contains a colon. If bindAddress is set to a port-only value like "8443" instead of ":8443", this will fail during template rendering.

Consider adding validation or using a more robust extraction method:

         ports:
         - name: metrics
-          containerPort: {{ split ":" .Values.backupController.metrics.bindAddress | mustLast }}
+          containerPort: {{ trimPrefix ":" .Values.backupController.metrics.bindAddress }}
         - name: health
           containerPort: 8081

Alternatively, if the format is guaranteed to be ":port", document this requirement in the values.yaml comments.

Likely an incorrect or invalid review comment.
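The behavioral difference debated above is easy to probe outside Helm. The sketch below approximates the two template expressions with Go's strings package (an analogy, not Sprig's exact semantics): the split-then-take-last strategy handles ":8443", a bare "8443", and even a host-qualified "0.0.0.0:8443", while the suggested trimPrefix silently returns the host-qualified form unchanged, which is consistent with the bot's own "likely incorrect" flag on this suggestion.

```go
package main

import (
	"fmt"
	"strings"
)

// lastAfterSplit approximates `split ":" v | mustLast`: take the
// final segment after splitting on ":".
func lastAfterSplit(v string) string {
	parts := strings.Split(v, ":")
	return parts[len(parts)-1]
}

// trimColon approximates the suggested `trimPrefix ":" v`.
func trimColon(v string) string {
	return strings.TrimPrefix(v, ":")
}

func main() {
	fmt.Println(lastAfterSplit(":8443"), trimColon(":8443"))               // 8443 8443
	fmt.Println(lastAfterSplit("8443"), trimColon("8443"))                 // 8443 8443
	fmt.Println(lastAfterSplit("0.0.0.0:8443"), trimColon("0.0.0.0:8443")) // 8443 0.0.0.0:8443
}
```

Either way, documenting the accepted bindAddress formats in values.yaml, as the comment suggests, resolves the ambiguity at the source.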

Comment on lines +59 to +63
- name: backup-controller
releaseName: backup-controller
chart: cozy-backup-controller
namespace: cozy-backup-controller

Contributor

🛠️ Refactor suggestion | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check if the backup-controller directory exists and explore its structure
find . -type d -name "backup-controller" | head -20

Repository: cozystack/cozystack

Length of output: 183


🏁 Script executed:

# Check the paas-hosted.yaml file to see how other releases define dependsOn
cat -n packages/core/platform/bundles/paas-hosted.yaml | head -80



🏁 Script executed:

# Search for backup-controller related files to understand dependencies
fd backup-controller --type f | head -30



🏁 Script executed:

# Check if backup-controller templates exist and what they contain
find ./packages/system/backup-controller -type f -name "*.yaml" -o -name "*.yml" | head -20



🏁 Script executed:

# Search for webhook configurations in backup-controller
rg -l 'WebhookConfiguration' ./packages/system/backup-controller/ 2>/dev/null || echo "No WebhookConfiguration found"



🏁 Script executed:

# Check for cert-manager references in backup-controller
rg 'cert-manager' ./packages/system/backup-controller/ 2>/dev/null || echo "No cert-manager references found"



🏁 Script executed:

# Check the main chart files for dependencies
cat ./packages/system/backup-controller/Chart.yaml 2>/dev/null || echo "Chart.yaml not found"



🏁 Script executed:

# Check if backup-controller references cozystack-controller or cozystack-api
rg 'cozystack-controller|cozystack-api' ./packages/system/backup-controller/



🏁 Script executed:

# Check the deployment to understand what it needs
cat -n ./packages/system/backup-controller/templates/deployment.yaml | head -60



Add explicit dependsOn field for consistency.

The backup-controller release definition is missing the dependsOn field, making it inconsistent with all other releases in the bundle. Every other release explicitly declares its dependencies (either empty or with listed dependencies).

 - name: backup-controller
   releaseName: backup-controller
   chart: cozy-backup-controller
   namespace: cozy-backup-controller
+  dependsOn: []

Backup-controller has no external dependencies (no webhooks, cert-manager references, or coupling to cozystack-controller/cozystack-api), so dependsOn: [] is appropriate.

📝 Committable suggestion


Suggested change

- name: backup-controller
  releaseName: backup-controller
  chart: cozy-backup-controller
  namespace: cozy-backup-controller
  dependsOn: []
🤖 Prompt for AI Agents
In packages/core/platform/bundles/paas-hosted.yaml around lines 59 to 63, the
backup-controller release block is missing the explicit dependsOn field used by
all other releases; add a dependsOn: [] entry under the backup-controller
release (keeping indentation consistent with other releases) so the block reads
with an explicit empty dependency list.

@lllamnyp lllamnyp merged commit 8989791 into main Dec 7, 2025
21 checks passed
@lllamnyp lllamnyp deleted the feat/backup-controller-pt2 branch December 7, 2025 12:20
@lllamnyp lllamnyp mentioned this pull request Dec 11, 2025
19 tasks
lllamnyp added a commit that referenced this pull request Jan 8, 2026
## What this PR does

This patch adds compilation and docker build steps for the backup
controller as well as adding a Helm chart to deploy it as part of the
PaaS bundles.

### Release note

```release-note
[backups] Build and deploy backup controller
```

(cherry picked from commit 8989791)
Signed-off-by: Timofei Larkin <lllamnyp@gmail.com>
kvaps added a commit that referenced this pull request Jan 16, 2026
…d backup system (#1867)

## What this PR does

Update changelog for v1.0.0-alpha.1 to include missing features:
- **Cozystack Operator**: New operator for Package and PackageSource
management (#1740, #1741, #1755, #1756, #1760, #1761)
- **Backup System**: Comprehensive backup functionality with Velero
integration (#1640, #1685, #1687, #1708, #1719, #1720, #1737, #1762)
- Add @androndo to contributors
- Update Full Changelog link to v0.38.0...v1.0.0-alpha.1

### Release note

```release-note
[docs] Update changelog for v1.0.0-alpha.1: add cozystack-operator and backup system
```