test: use job metadata to avoid test flakes#155
Conversation
@crwilcox I am trying with a
@tswast I think this is reasonable. If you look at the Go example, this is the strategy used there, though we don't wait 10s. Instead it does an exponential backoff up to 3 times, so most cases don't take much longer.
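The backoff approach described above could be sketched roughly as follows. This is an illustrative stand-in, not the actual Go or Python test code; `read_rows` is a hypothetical callable representing the storage-API read in the test.

```python
import time


def read_with_backoff(read_rows, max_attempts=3, base_delay=1.0):
    """Retry a read that may return zero rows, with exponential backoff.

    ``read_rows`` is any callable returning a list of rows (hypothetical
    stand-in for the actual read in the system test).
    """
    rows = []
    for attempt in range(max_attempts):
        rows = read_rows()
        if rows:
            # Got data: the snapshot time was far enough in the past.
            return rows
        if attempt < max_attempts - 1:
            # Wait 1s, 2s, 4s, ... instead of a flat 10s on every run,
            # so the common case stays fast.
            time.sleep(base_delay * (2 ** attempt))
    return rows
```

In the common case the first read succeeds and no sleep happens at all, which is why this beats an unconditional `time.sleep(10)`.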
tests/system/reader/test_reader.py
Outdated
# Sleep for a moment to give us some wiggle room in case the BigQuery
# snapshot time and our times are out-of-sync.
# https://github.com/googleapis/python-bigquery-storage/issues/151
time.sleep(10)
What is the point of the initial sleep? It seems the second one is to ensure that the snapshot time requested is in the past?
The flake is actually that there are 0 rows, so the snapshot time is too early in that case.
That said, Seth makes a good point that I can use the times from the table metadata.
shollyman left a comment
If you have a reference to the table metadata as well as the job statistics from the load you're doing, you may be able to adapt the logic from the table delete/undelete snippet and avoid the wasted time sleeping.
@shollyman Good point! I've updated the test to use the times from the job metadata.
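The idea above is to derive the snapshot time from the load job's own metadata rather than from the test runner's clock, so clock skew between the runner and BigQuery can't produce a too-early snapshot. A minimal sketch of that logic, assuming the job's completion timestamp is available (e.g. the `ended` property on a google-cloud-bigquery `LoadJob`):

```python
from datetime import datetime, timedelta, timezone


def snapshot_time_after(job_ended):
    """Pick a read-session snapshot time guaranteed to see the loaded rows.

    ``job_ended`` is the load job's completion timestamp as reported by the
    BigQuery service itself. Any instant at or after the job finished sees
    the data, so we no longer depend on the test machine's clock agreeing
    with BigQuery's snapshot clock.
    """
    # Add a small margin past the job's end time; one second is arbitrary
    # but keeps the snapshot comfortably after the load commit.
    return job_ended + timedelta(seconds=1)
```

The returned timestamp would then be passed as the read session's snapshot time (`table_modifiers.snapshot_time` in the storage API); the exact wiring is omitted here, since it depends on the client version under test.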
This test flaked again. https://source.cloud.google.com/results/invocations/f821c579-a7fb-45a7-9110-dad4dd830842/targets/github%2Fpython-bigquery-storage/tests Looks like we should add a
Follow-up to #155

With 2 load jobs right in a row, we seem to be hitting rate limiting. Add `retry_403` from the `google-cloud-bigquery` tests, which seems to work around this.

Thank you for opening a Pull Request! Before submitting your PR, there are a few things you can do to make sure it goes smoothly:

- [ ] Make sure to open an issue as a [bug/issue](https://github.com/googleapis/python-bigquery-storage/issues/new/choose) before writing your code! That way we can discuss the change, evaluate designs, and agree on the general idea
- [ ] Ensure the tests and linter pass
- [ ] Code coverage does not decrease (if any source code was changed)
- [ ] Appropriate docs were updated (if necessary)

Fixes #151
Fixes #161 🦕
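A `retry_403`-style helper retries an operation when the service responds with HTTP 403 (rate limiting). The sketch below is illustrative only, assuming a `Forbidden` exception type standing in for `google.api_core.exceptions.Forbidden`; it is not the exact helper from the `google-cloud-bigquery` test suite.

```python
import time


class Forbidden(Exception):
    """Stand-in for google.api_core.exceptions.Forbidden (HTTP 403)."""


def retry_403(func, max_attempts=3, delay=1.0):
    """Wrap ``func`` so that 403 responses are retried a few times.

    Back-to-back load jobs can trip BigQuery rate limits; retrying after
    a short pause usually succeeds on a later attempt.
    """
    def wrapped(*args, **kwargs):
        for attempt in range(max_attempts):
            try:
                return func(*args, **kwargs)
            except Forbidden:
                if attempt == max_attempts - 1:
                    raise  # out of retries; surface the rate-limit error
                time.sleep(delay)
    return wrapped
```

In a test, the load-job call would be wrapped once, e.g. `retry_403(client.load_table_from_json)(...)`, so transient 403s no longer fail the run.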
Fixes #151 🦕