Conversation
jrbourbeau
left a comment
Nice! cc @ian-r-rose if you get a chance to take a look
Unit Test Results
See test report for an extended history of previous test failures. This is useful for diagnosing flaky tests.
15 files ±0, 15 suites ±0, 7h 7m 52s ⏱️ +35m 2s
For more details on these failures, see this check.
Results for commit 6a50f7f. ± Comparison against base commit d6160c8.
♻️ This comment has been updated with latest results.
ian-r-rose
left a comment
Nice, thanks for this @fjetter! I think this is looking pretty good. I believe we were already close to hitting the maximum requests-per-hour with this workflow, so we should keep an eye on this to make sure the extra requests to the jobs endpoint don't tip us over the edge.
If we hit the API rate limit, I suggest caching the shelve artifacts, which buffer the API requests. Since in this PR we're shelving the raw XML response instead of the parsed dataframe, we can do this safely without breaking forward/backward compatibility of the script, and it should be safe to reuse these file caches.
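To illustrate the buffering idea: a minimal sketch of caching raw API responses in a shelve file, so repeated runs only spend rate-limited requests on cache misses. The cache path and helper name here are hypothetical, not the script's actual identifiers.

```python
import shelve
from urllib.request import urlopen

# Hypothetical cache location; the real script's shelve artifact may differ.
CACHE_PATH = "test_report_cache"

def cached_get(url: str) -> bytes:
    """Return the raw response body for `url`, fetching only on a cache miss.

    Because the cached value is the raw (unparsed) response, the parsing
    logic can change later without invalidating the cache.
    """
    with shelve.open(CACHE_PATH) as cache:
        if url not in cache:
            with urlopen(url) as resp:
                cache[url] = resp.read()
        return cache[url]
```

Reusing such a file cache across CI runs (e.g. via an actions cache) would then keep the per-run request count roughly constant even as the history grows.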
ian-r-rose
left a comment
LGTM, thanks @fjetter
By fetching the jobs info, we can show a URL pointing to the actual job (e.g. OSX noci1).
With this PR
https://github.com/dask/distributed/runs/7666177056?check_suite_focus=true
Without
https://github.com/dask/distributed/actions/runs/2794492651
It also reduces the size of the generated HTML page by shrinking the DataFrame before rendering; the HTML had already grown to 10-20 MB.
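The shrinking step might look roughly like the sketch below: drop columns that aren't rendered and keep only recent rows before the DataFrame is turned into HTML. The column names and the retention window are assumptions for illustration, not the script's actual schema.

```python
import pandas as pd

def shrink_for_report(df: pd.DataFrame, keep_days: int = 90) -> pd.DataFrame:
    """Reduce the test-history DataFrame before rendering it to HTML.

    Keeps only rows within `keep_days` of the newest entry, and only the
    columns the report actually displays (assumed names).
    """
    cutoff = df["date"].max() - pd.Timedelta(days=keep_days)
    recent = df[df["date"] >= cutoff]
    return recent[["suite", "test", "date", "status", "url"]]
```

Rendering `shrink_for_report(df).to_html()` instead of the full frame is what keeps the page size bounded as the history accumulates.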