
Conversation

@IvanHunters (Collaborator) commented Jan 2, 2026

What this PR does

Release note

[kubernetes] Add lb tests for tenant k8s

Summary by CodeRabbit

  • Tests
    • Increased readiness and port-forward timeouts to improve stability.
    • Added full end-to-end provisioning and validation: automated namespace and backend deployment, load balancer provisioning, health checks with retries, reachability validation, and cleanup.
    • Provisioning sequence now runs earlier and is duplicated within the test flow, altering execution order and adding extra validation/cleanup steps.


Signed-off-by: IvanHunters <xorokhotnikov@gmail.com>
@dosubot dosubot bot added the size:L (This PR changes 100-499 lines, ignoring generated files) label Jan 2, 2026
@gemini-code-assist bot commented

Summary of Changes

Hello @IvanHunters, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the end-to-end testing suite for tenant Kubernetes clusters by introducing a comprehensive test for LoadBalancer services. The new test deploys a sample Nginx application, exposes it through a LoadBalancer, and then verifies external connectivity, ensuring that LoadBalancer provisioning and traffic routing function correctly within the tenant environment. Additionally, existing timeouts in the test script have been adjusted to improve overall test reliability.

Highlights

  • Increased Timeouts: The timeouts for Kubernetes resource readiness and API server port-forwarding have been extended to improve test stability and accommodate potential delays.
  • New LoadBalancer Test: A new end-to-end test has been added to validate the functionality of LoadBalancer services within tenant Kubernetes clusters.
  • Nginx Backend Deployment: The new test deploys a simple Nginx application and exposes it via a LoadBalancer service to simulate a real-world scenario.
  • LoadBalancer Connectivity Verification: A curl pod is used to actively check if the provisioned LoadBalancer is reachable and correctly routing traffic to the backend application.
  • Automated Cleanup: The test includes automated cleanup steps to remove the deployed Nginx application and LoadBalancer service after execution.



coderabbitai bot commented Jan 2, 2026


📝 Walkthrough

Added a new end-to-end provisioning and validation sequence to hack/e2e-apps/run-kubernetes.sh: creates a Namespace, Deployment (nginx), and LoadBalancer Service, waits for readiness and LB address, performs health checks, then cleans up; also increased several kubectl timeouts and duplicated the provisioning block within the script.
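
A condensed sketch of that provisioning sequence follows. Names like `$test_name`, `$tenant_kubeconfig`, the `tenant-test` namespace, and the `${test_name}-lb` Service are taken or inferred from the review comments below; the actual script may differ in details such as replica counts and probe settings.

```bash
#!/usr/bin/env bash
set -euo pipefail

# Create the namespace and an nginx backend with a readiness probe,
# then expose it through a LoadBalancer Service (all in the tenant cluster).
kubectl --kubeconfig "$tenant_kubeconfig" apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-test
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${test_name}-backend
  namespace: tenant-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
      backend: ${test_name}-backend
  template:
    metadata:
      labels:
        app: backend
        backend: ${test_name}-backend
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
            periodSeconds: 2
---
apiVersion: v1
kind: Service
metadata:
  name: ${test_name}-lb
  namespace: tenant-test
spec:
  type: LoadBalancer
  selector:
    app: backend
    backend: ${test_name}-backend
  ports:
    - port: 80
      targetPort: 80
EOF

# Wait for the backend pod to pass its readiness probe
# (this PR raised the wait timeout to 5m).
kubectl --kubeconfig "$tenant_kubeconfig" wait -n tenant-test \
  --for=condition=Ready pod -l backend="${test_name}-backend" --timeout=5m
```

The LB address derivation, curl retries, and cleanup steps are discussed in the review comments that follow.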

Changes

| Cohort / File(s) | Change summary |
| --- | --- |
| **E2E script**<br>`hack/e2e-apps/run-kubernetes.sh` | Added an inlined end-to-end provisioning sequence (Namespace, backend Deployment with readinessProbe, LoadBalancer Service): waits for pod readiness and LB provisioning, derives the LB address, performs curl health checks with retries, captures/validates LB reachability, then deletes the created resources. Increased `kubectl wait` timeout from 2m to 5m and `kubectl port-forward` timeout from 300s to 500s. The provisioning block is duplicated (appears twice) and inserted before the existing node/cluster validations. Attention: duplicated logic, cleanup ordering, timeout values, and LB address parsing. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  participant Script as run-kubernetes.sh
  participant KubeAPI as Kubernetes API (kubectl)
  participant KubeNode as Node / kubelet
  participant LoadBalancer as Cloud LB / Service
  participant External as External HTTP client (curl)

  rect rgb(230, 242, 255)
  Script->>KubeAPI: apply Namespace + Deployment + Service YAML
  end

  KubeAPI->>KubeNode: schedule Pod
  KubeNode-->>KubeAPI: pod Ready (readinessProbe)
  Note right of KubeAPI: wait for pod readiness (up to 5m)

  rect rgb(255, 242, 230)
  KubeAPI->>LoadBalancer: provision LoadBalancer for Service
  LoadBalancer-->>KubeAPI: LB address assigned
  Note right of LoadBalancer: wait for LB provisioning (with retries)
  end

  rect rgb(232, 255, 230)
  Script->>External: curl http://<LB_ADDR>:<port> (retries)
  External-->>Script: HTTP 200 / reachable
  end

  alt success
    Script->>KubeAPI: delete Deployment & Service (cleanup)
  else failure
    Script->>Script: exit with error
  end
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

A rabbit hops, scripts in paw,
I spin up namespaces—oh what a draw! 🐇
Pods bloom, load balancers show,
I ping and wait, then watch them glow.
Cleanup done — a tidy meadow.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is below the required threshold of 80.00%. | Run `@coderabbitai generate docstrings` to improve docstring coverage. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title check | ✅ Passed | The title '[kubernetes] Add lb tests for tenant k8s' accurately describes the main change: adding load balancer tests for tenant Kubernetes, which matches the PR objectives and file summary. |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
✨ Finishing touches
  • 📝 Generate docstrings


@gemini-code-assist bot left a comment

Code Review

This pull request adds end-to-end tests for LoadBalancer services in tenant Kubernetes clusters. The changes include increasing some timeouts and adding a new test sequence that deploys an application, exposes it via a LoadBalancer service, and verifies connectivity. My review found a few critical issues that will cause the test to fail: a mismatch between the Service selector and Pod labels, and a missing --kubeconfig flag for a kubectl command. I also found a potential race condition due to a short timeout and have suggestions to improve cleanup and code clarity. Please see the detailed comments.
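
For context on the missing `--kubeconfig` issue: in a tenant-cluster test, any `kubectl` call without that flag silently targets the management cluster instead. A minimal sketch, assuming the tenant kubeconfig path is held in a variable like `$tenant_kubeconfig` (the variable name is an assumption, not taken from the script):

```bash
# Targets the management cluster by default - the wrong cluster for this test.
kubectl get svc -n tenant-test "${test_name}-lb"

# Explicitly targets the tenant cluster ($tenant_kubeconfig is a
# hypothetical variable holding the tenant cluster's kubeconfig path).
kubectl --kubeconfig "$tenant_kubeconfig" get svc -n tenant-test "${test_name}-lb"
```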

@coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
hack/e2e-apps/run-kubernetes.sh (1)

205-218: Inconsistent indentation.

The kubectl run block has inconsistent indentation compared to the rest of the function. Consider aligning with the surrounding code for readability.

🔎 Proposed fix

```diff
-  kubectl run -n tenant-test lb-check-${test_name} \
-  --rm -i --restart=Never \
-  --image=curlimages/curl \
-  --timeout=60s \
-  --command -- \
-  sh -c "
-    for i in \$(seq 1 20); do
-      echo \"Attempt \$i\";
-      curl -sf http://${LB_ADDR} && exit 0;
-      sleep 3;
-    done;
-    echo 'LoadBalancer not reachable';
-    exit 1
-  "
+  kubectl run -n tenant-test lb-check-${test_name} \
+    --rm -i --restart=Never \
+    --image=curlimages/curl \
+    --timeout=60s \
+    --command -- \
+    sh -c "
+      for i in \$(seq 1 20); do
+        echo \"Attempt \$i\";
+        curl -sf http://${LB_ADDR} && exit 0;
+        sleep 3;
+      done;
+      echo 'LoadBalancer not reachable';
+      exit 1
+    "
```
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2d6e50b and 5638a7e.

📒 Files selected for processing (1)
  • hack/e2e-apps/run-kubernetes.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build
🔇 Additional comments (5)
hack/e2e-apps/run-kubernetes.sh (5)

75-75: LGTM on timeout increases.

The extended timeouts (5m for tcp wait, 500s for port-forward) are reasonable to accommodate slower cluster provisioning scenarios.

Also applies to: 90-90
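
For reference, the two timeouts in question look roughly like this. This is a sketch only: the resource names are placeholders, the review describes the first as a TCP wait, and the script's actual port-forward mechanism may differ.

```bash
# Wait raised from 2m to 5m in this PR (resource and condition are placeholders).
kubectl wait --for=condition=Ready pod -l app=backend -n tenant-test --timeout=5m

# Port-forward to the tenant API server kept alive for up to 500s; wrapping
# with coreutils `timeout` is one plausible mechanism, not necessarily the
# script's actual approach.
timeout 500s kubectl port-forward -n tenant-test svc/kubernetes-api 6443:443
```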


128-133: LGTM!

Namespace creation in the tenant cluster is correctly applied using the tenant kubeconfig.


136-165: LGTM!

Deployment configuration looks good with proper readiness probe configuration.


187-203: LGTM!

The LoadBalancer address retrieval correctly handles both IP and hostname scenarios, with proper validation.


220-222: LGTM!

Cleanup correctly removes the test resources from the tenant cluster.

Run curl directly from the testing container instead of creating
a separate pod with kubectl run. This avoids PodSecurity policy
violations and simplifies the test execution.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
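
A sketch of the before/after shape of that change (the pod-based variant mirrors the earlier review diff; the direct variant matches the retry snippet reviewed below):

```bash
# Before (sketch): spawn a throwaway curl pod inside the tenant cluster.
# The tenant-test namespace must admit this pod under PodSecurity policy.
kubectl run -n tenant-test "lb-check-${test_name}" --rm -i --restart=Never \
  --image=curlimages/curl -- curl -sf "http://${LB_ADDR}"

# After (sketch): probe the LoadBalancer address directly from the testing
# container's own shell - no extra pod, no PodSecurity admission involved.
curl --silent --fail "http://${LB_ADDR}"
```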
@dosubot dosubot bot added the size:M (This PR changes 30-99 lines, ignoring generated files) label and removed the size:L (This PR changes 100-499 lines, ignoring generated files) label Jan 2, 2026
@coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
hack/e2e-apps/run-kubernetes.sh (1)

176-177: Service selector mismatch will cause test failure.

The selector app: "${test_name}-backend" does not match the pod labels app: backend and backend: "${test_name}-backend" from the Deployment. This results in zero endpoints, and the curl health check will fail.

🔎 Proposed fix

```diff
   selector:
-    app: "${test_name}-backend"
+    app: backend
+    backend: "${test_name}-backend"
```
🧹 Nitpick comments (1)
hack/e2e-apps/run-kubernetes.sh (1)

216-218: Cleanup approach is functional.

Deleting resources individually works. Consider also deleting the tenant-test namespace in the tenant cluster at cleanup to avoid leftover resources, though this is optional since the entire tenant cluster gets deleted at the end.
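
A sketch of that optional namespace-level cleanup (assuming the same `$tenant_kubeconfig` convention as above):

```bash
# Deleting the namespace removes the Deployment and Service in one call;
# --ignore-not-found keeps the teardown idempotent on re-runs.
kubectl --kubeconfig "$tenant_kubeconfig" delete namespace tenant-test --ignore-not-found
```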

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5638a7e and dd0bbd3.

📒 Files selected for processing (1)
  • hack/e2e-apps/run-kubernetes.sh
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Build
🔇 Additional comments (4)
hack/e2e-apps/run-kubernetes.sh (4)

75-75: LGTM on timeout increases.

The increased timeouts (2m→5m for TCP wait, 300s→500s for port-forward) appropriately accommodate the additional LoadBalancer provisioning and validation steps added to this test.

Also applies to: 90-90


127-133: Namespace creation is correct.

The namespace is properly created in the tenant cluster using the correct kubeconfig. Note: per past review feedback, "tenant-test" is also used in the management cluster which can be confusing, but this is a minor clarity concern.


135-165: Deployment configuration looks good.

The deployment has correctly matching labels and selectors, and includes an appropriate readiness probe with short intervals for the test scenario.


194-203: LGTM on LoadBalancer address retrieval.

The jsonpath concatenation pattern correctly handles both IP and hostname-based LoadBalancer implementations, with appropriate validation for empty addresses.
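
A minimal sketch of that concatenation pattern (the Service name and kubeconfig variable are assumptions based on the surrounding review context):

```bash
# Concatenating both jsonpath fields yields whichever one is populated:
# IP-based LBs set .ip, DNS-based LBs set .hostname; the other is empty.
LB_ADDR=$(kubectl --kubeconfig "$tenant_kubeconfig" get svc -n tenant-test "${test_name}-lb" \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}{.status.loadBalancer.ingress[0].hostname}')

# Validate that an address was actually assigned before probing it.
if [ -z "$LB_ADDR" ]; then
  echo "LoadBalancer address was not assigned" >&2
  exit 1
fi
```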

Comment on lines +205 to +214
```bash
  for i in $(seq 1 20); do
    echo "Attempt $i"
    curl --silent --fail "http://${LB_ADDR}" && break
    sleep 3
  done

  if [ "$i" -eq 20 ]; then
    echo "LoadBalancer not reachable" >&2
    exit 1
  fi
```

⚠️ Potential issue | 🟡 Minor

Retry loop logic has an edge case bug.

If curl succeeds on the 20th attempt, $i equals 20 and the check [ "$i" -eq 20 ] incorrectly reports failure. The current logic cannot distinguish between "failed all 20 times" and "succeeded on attempt 20".

🔎 Proposed fix using a success flag

```diff
-  for i in $(seq 1 20); do
+  lb_success=false
+  for i in $(seq 1 20); do
     echo "Attempt $i"
-    curl --silent --fail "http://${LB_ADDR}" && break
+    if curl --silent --fail "http://${LB_ADDR}"; then
+      lb_success=true
+      break
+    fi
     sleep 3
   done

-  if [ "$i" -eq 20 ]; then
+  if [ "$lb_success" != "true" ]; then
     echo "LoadBalancer not reachable" >&2
     exit 1
   fi
```
🤖 Prompt for AI Agents
In hack/e2e-apps/run-kubernetes.sh around lines 205 to 214, the retry loop
incorrectly treats a success on the 20th curl attempt as a failure because it
checks if i == 20 after the loop; change the logic to track success explicitly
(e.g., set a variable like success=0 before the loop, set success=1 and break
when curl succeeds) and after the loop check the success flag (exit with error
only if success is still 0) so a successful 20th attempt is considered success.
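
Applied to the snippet above, the corrected loop reads as follows (a sketch of the proposed fix, not the merged code):

```bash
lb_success=false
for i in $(seq 1 20); do
  echo "Attempt $i"
  # Track success explicitly so a success on attempt 20 is not misread as failure.
  if curl --silent --fail "http://${LB_ADDR}"; then
    lb_success=true
    break
  fi
  sleep 3
done

if [ "$lb_success" != "true" ]; then
  echo "LoadBalancer not reachable" >&2
  exit 1
fi
```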

The Service selector was using app: "${test_name}-backend" but the
Deployment pod template has app: backend. Fixed selector to match
the actual pod labels so endpoints are created correctly.

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>
@kvaps kvaps changed the title add lb tests for tenant k8s [kubernetes] dd lb tests for tenant k8s Jan 3, 2026
@kvaps kvaps changed the title [kubernetes] dd lb tests for tenant k8s [kubernetes] Add lb tests for tenant k8s Jan 3, 2026
@dosubot dosubot bot added the lgtm (This PR has been approved by a maintainer) label Jan 3, 2026
@kvaps kvaps added the backport (Should change be backported on previous release) and backport-previous labels Jan 3, 2026
@kvaps kvaps merged commit 7a964eb into main Jan 3, 2026
27 of 28 checks passed
@kvaps kvaps deleted the tests/tenant-loadbalancer branch January 3, 2026 07:30
github-actions bot commented Jan 3, 2026

Successfully created backport PR for release-0.38:

github-actions bot commented Jan 3, 2026

Successfully created backport PR for release-0.37:

kvaps added a commit that referenced this pull request Jan 3, 2026
kvaps added a commit that referenced this pull request Jan 8, 2026
kvaps added a commit that referenced this pull request Jan 8, 2026
kvaps added a commit that referenced this pull request Jan 9, 2026

Labels

  • backport: Should change be backported on previous release
  • backport-previous
  • lgtm: This PR has been approved by a maintainer
  • size:M: This PR changes 30-99 lines, ignoring generated files.
