
fix: improve chart rendering tests and response handling #3164

Merged
robfrank merged 1 commit into main from fix/e2e-studio
Jan 17, 2026
Conversation

@robfrank
Collaborator

No description provided.

@robfrank robfrank added this to the 26.1.1 milestone Jan 17, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello @robfrank, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers get up to speed quickly.

This pull request refines existing end-to-end tests to enhance their stability and accuracy. The changes primarily focus on improving synchronization mechanisms within tests, ensuring that UI elements like charts and data tables are fully rendered and populated with data before assertions are made. This addresses potential flakiness due to asynchronous operations and improves the overall reliability of the test suite.

Highlights

  • Improved Chart Rendering Tests: The apexcharts-upgrade.spec.ts test now explicitly waits for the relevant API response (api/v1/server) before asserting chart visibility and data points, making the test more robust against timing issues and ensuring data is loaded.
  • Enhanced Chart Visibility Checks: A new explicit wait has been added to ensure at least one chart SVG is visible before proceeding to count all charts, preventing premature assertions and improving test reliability.
  • More Reliable DataTables Test: The datatables-compatibility.spec.ts test now includes a longer waitForTimeout before switching to the Table tab and an explicit expect().toBeVisible() with a timeout for the data table after switching, addressing potential rendering delays for complex data.
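As a rough sketch of the response-wait pattern described in the highlights above (the `ResponseInfo` stand-in, the `/studio` route, and the flow comments are illustrative, not copied from the PR diff):

```typescript
// Minimal stand-in for Playwright's Response object, so the predicate below
// can be exercised without a browser; in the real spec the argument is a
// Playwright Response.
interface ResponseInfo {
  url(): string;
  ok(): boolean;
}

// A predicate of the kind passed to page.waitForResponse(...): accept only a
// successful response from the server endpoint named in the PR.
function isServerApiResponse(response: ResponseInfo): boolean {
  return response.url().includes('api/v1/server') && response.ok();
}

// In apexcharts-upgrade.spec.ts the flow is then, roughly:
//
//   const responsePromise = page.waitForResponse(isServerApiResponse);
//   await page.goto('/studio');              // hypothetical route
//   await responsePromise;                   // data has arrived
//   await expect(page.locator('.apexcharts-svg').first()).toBeVisible();
//   expect(await page.locator('.apexcharts-svg').count()).toBeGreaterThan(0);
```

Starting the `waitForResponse` promise before triggering navigation matters: if the response arrives before the listener is registered, the wait can hang indefinitely.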



@mergify
Contributor

mergify bot commented Jan 17, 2026

🧪 CI Insights

Here's what we observed from your CI run for b1bb7b9.

🟢 All jobs passed!

But CI Insights is watching 👀


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request improves the E2E tests by enhancing response handling and fixing potential flakiness in chart rendering tests. The changes in apexcharts-upgrade.spec.ts correctly wait for API responses before making assertions, which is a solid improvement. However, in datatables-compatibility.spec.ts, a fixed timeout (waitForTimeout) has been introduced, which is an anti-pattern that can lead to flaky tests. My review includes suggestions to replace this with more robust, condition-based waits to improve test reliability.

Comment on lines +562 to +566
```typescript
// Wait a moment for graph rendering to stabilize before switching tabs
await page.waitForTimeout(1000);

// Switch to Table tab
await page.locator('a[href="#tab-table"]').click({ force: true });
```

Severity: high

Using page.waitForTimeout() is an anti-pattern in Playwright tests as it can lead to flakiness. The test might pass locally but fail in a different environment (like CI) where rendering takes more or less time. It's better to wait for a specific condition or element state.

Since the comment mentions waiting for graph rendering to stabilize, you should wait for a specific element from the graph to be ready. For example, you can wait for the graph canvas to become visible. This provides a more reliable signal that it's safe to proceed.

Additionally, the use of { force: true } is often a sign that the previous wait is insufficient. After switching to a more reliable wait, you should be able to remove force: true.

Suggested change:

```diff
-// Wait a moment for graph rendering to stabilize before switching tabs
-await page.waitForTimeout(1000);
-
-// Switch to Table tab
-await page.locator('a[href="#tab-table"]').click({ force: true });
+// Wait for graph rendering to complete before switching tabs
+await expect(page.locator('canvas').last()).toBeVisible();
+
+// Switch to Table tab
+await page.locator('a[href="#tab-table"]').click();
```
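The difference between a fixed sleep and a condition-based wait can be sketched generically; Playwright's web-first assertions poll in roughly this way internally (the helper below is an illustrative stand-in, not Playwright API):

```typescript
// Poll a predicate until it holds or a deadline passes. Unlike a fixed
// waitForTimeout(1000), this returns as soon as the condition becomes true
// and fails loudly when it never does.
async function waitForCondition(
  check: () => boolean | Promise<boolean>,
  timeoutMs = 5000,
  pollMs = 50,
): Promise<void> {
  const deadline = Date.now() + timeoutMs;
  do {
    if (await check()) return;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  } while (Date.now() < deadline);
  throw new Error(`condition not met within ${timeoutMs}ms`);
}
```

A fixed sleep is both too slow (it always waits the full second) and too fragile (a loaded CI runner may need longer); polling a concrete condition avoids both failure modes.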

Comment on lines +119 to +121
```typescript
const responsePromise = page.waitForResponse(response =>
  response.url().includes('api/v1/server')
);
```

Severity: medium

While waiting for the response is a great improvement for test stability, the URL matcher response.url().includes('api/v1/server') is a bit broad and could potentially match other API calls (e.g., /api/v1/server/status). It also doesn't guarantee the request was successful.

To make this more robust, I'd recommend also checking for a successful response status using response.ok(). This ensures you're waiting for a valid response before proceeding with assertions.

Suggested change:

```diff
-const responsePromise = page.waitForResponse(response =>
-  response.url().includes('api/v1/server')
-);
+const responsePromise = page.waitForResponse(response =>
+  response.url().includes('api/v1/server') && response.ok()
+);
```
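To also rule out sibling endpoints such as `/api/v1/server/status`, the matcher could anchor the path instead of relying on a substring check. This stricter variant goes beyond what the review proposed and is only a suggestion:

```typescript
// Accept only the exact /api/v1/server path (optionally with a query string),
// combined with a successful status, instead of any URL containing the
// substring 'api/v1/server'.
function isExactServerResponse(url: string, ok: boolean): boolean {
  return /\/api\/v1\/server(\?|$)/.test(url) && ok;
}
```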

@robfrank robfrank merged commit fa7ddc5 into main Jan 17, 2026
16 of 19 checks passed
@robfrank robfrank deleted the fix/e2e-studio branch January 17, 2026 15:46
@codacy-production

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation: -0.01% · Diff coverage: ∅ (not applicable)

Coverage variation details

| Commit | Coverable lines | Covered lines | Coverage |
| --- | --- | --- | --- |
| Common ancestor (0ffdf13) | 99570 | 55493 | 55.73% |
| Head (b1bb7b9) | 99570 (+0) | 55483 (-10) | 55.72% (-0.01%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details

| Scope | Coverable lines | Covered lines | Diff coverage |
| --- | --- | --- | --- |
| Pull request (#3164) | 0 | 0 | ∅ (not applicable) |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


robfrank added a commit that referenced this pull request Feb 11, 2026
