Handle passing of ref beacons in response msgs #1863
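For orientation, the PR title refers to Weaviate reference beacons. As a purely illustrative sketch (the beacon shape `weaviate://localhost/<Collection>/<uuid>`, the function names, and the types below are all assumptions for illustration, not this PR's actual implementation), splitting such a beacon out of a response message might look like:

```python
import re
from typing import NamedTuple, Optional

# Assumed beacon shape for this sketch: weaviate://localhost/<Collection>/<uuid>,
# where the collection segment may be absent in older-style beacons.
BEACON_RE = re.compile(
    r"^weaviate://localhost/(?:(?P<collection>[A-Za-z]\w*)/)?"
    r"(?P<uuid>[0-9a-fA-F-]{36})$"
)


class RefBeacon(NamedTuple):
    collection: Optional[str]
    uuid: str


def parse_ref_beacon(beacon: str) -> Optional[RefBeacon]:
    """Split a reference beacon string into collection and uuid parts.

    Returns None when the string does not match the assumed beacon shape.
    """
    match = BEACON_RE.match(beacon)
    if match is None:
        return None
    return RefBeacon(match.group("collection"), match.group("uuid"))
```

A client handling response messages could apply such a parser to each returned reference before resolving it into an object; the real client's handling may differ.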
Merged
Conversation
Orca Security Scan Summary: Secrets check (View in Orca).
Codecov Report
❌ Patch coverage is
Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           dev/1.34    #1863    +/-  ##
============================================
- Coverage     86.80%   86.76%   -0.05%
============================================
  Files           273      273
  Lines         19661    19673      +12
============================================
+ Hits          17067    17069       +2
- Misses         2594     2604      +10
```

☔ View full report in Codecov by Sentry.
tsmith023 added a commit that referenced this pull request on Nov 10, 2025:
- flat index: Add support for RQ and include cache param
- Update tests for RQ
- Add function to retry if http error
- Add 1.34 CI and test configuration and move function to conftest
- Update comments
- Add 134 version
- Set 1.34 dev image for CI jobs
- Add ACORN as default filter strategy in 1.34
- Comment backup test temporarily
- Introduce `batch.experimental()` while server-side batching is in beta (#1765)
  - Add client-side changes to handle new server-side batching in 1.33
  - Update images in CI
  - Update 1.33 image in CI
  - Alter test for lazy shard loading for new potential server behaviour
  - Change other lazy loading test too
  - Fix CI image for 1.33
  - Update protos, fix setting of batch client in wrapper to avoid races with connection
  - Remove debug assert in test
  - Update new batch to use different modes with server, update CI image
  - Refactor to changed server batching options
  - Throw error if using automatic batching with incompatible server
  - Add exponential backoff retry to stream reconnect method
  - Remove timeout and retries from new grpc methods
  - Only delete key if present in dict
  - Close before re-connecting, reset rec num objs on shutdown
  - Update to use latest protos and behaviour
  - Improve logging using `.automatic()`
  - Update CI image to latest server build
  - Fix testing issues with new versions
  - Attempt fixes for tests again
  - Add ability to retry certain server-emitted full errors, e.g. temporary replication problems
  - Attempt fixes of flakes
  - Update to use latest server impl and CI image
  - Update to use latest dev server version
  - Rename from automatic to experimental, bump CI version to latest RC
  - Push ongoing changes
  - Update to use latest server image
  - Update to use latest server changes
  - Undo debug changes to conftest
  - Update to use latest server image
  - Make internal send/recv queue size 1 and sleep while shutdown to avoid pushing to it
  - Update to use latest server image
  - Fix shutting down message handling
  - Skip backoff handling if client has closed the stream
  - Remove unused code
  - Don't print backoff adjustments when shutting down
  - Improve shutting down log
  - Attempt to catch last req that can be lost during shutdown
  - Avoid circular import
  - Remove last_req wrapping logic from stream, reduce logging, update image in CI
  - Close the client-side of the stream on shutdown, sleep for backoff during req generation
  - Update CI image
  - Only log waiting for stream re-establishment once
  - Switch from arm to amd in CI
  - Shutdown client-side stream regardless of size of __reqs queue
  - Increase timeout when waiting for req to send, don't use queue size in if due to unreliability
  - Use sentinel in req put/get to avoid inaccurate block timeouts
  - Update CI image
  - Correctly populate batch.results
  - Update CI images
  - Assert indexing status is in one of the allowed values rather than a specific value
  - Undo debug changes in tests
  - Update to match new server impl
  - Update to use latest server image
  - Only start threads once to avoid runtime error when handling shutdown
  - Update CI images
  - Hard-code SSB concurrency to 1 for now
  - Fix collection.batch.automatic
  - Correct logic in `_BgThreads.is_alive`
  - Adjust default batch size to align with server default and avoid overloading server too fast
  - Update CI images and version checks in tests
  - Update to use latest server behaviour around backoffs and uuid/err results
  - Lock once when reading batch results from stream
  - Interpret context canceled as ungraceful shutdown to be restarted by client
  - Use backoff message to adjust batch size
  - Start batching with smallest allowed server value
  - Add extra log in batch send
  - Reintroduce timeout when getting from queue
  - Add log to empty queue
  - Add log to batch recv restart
  - Remove timeout when getting from internal queue
  - Only update batch size if value has changed
  - Track then log total number of objects pushed by client
  - WIP: receive shutdown as message and not rpc error
  - Move result writing inside message.results case
  - Add missing proto changes
  - Update CI image
  - Improve resilience on unexpected server behaviour

  Co-authored-by: Dirk Kulawiak <dirk@semi.technology>
- Ensure created backup in test finishes before starting new test
- Wait until all nodes are back and healthy before reconnecting when CL=ALL
- Handle passing of ref beacons in response msgs (#1863)

Co-authored-by: Rodrigo Lopez <rodrigo.lopez@weaviate.io>
Co-authored-by: Dirk Kulawiak <dirk@weaviate.io>
Co-authored-by: Dirk Kulawiak <dirk@semi.technology>
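Several commits above mention adding exponential backoff retry to the stream reconnect method. A minimal sketch of that pattern, assuming a hypothetical `connect` callable and parameter names of my own choosing (none of these names come from the client's code):

```python
import random
import time


def reconnect_with_backoff(connect, max_retries=5, base_delay=0.5, max_delay=8.0):
    """Retry `connect` with exponential backoff and a little jitter.

    Illustrative only: doubles the delay after each failed attempt, caps it
    at `max_delay`, and re-raises the last error once retries are exhausted.
    """
    for attempt in range(max_retries):
        try:
            return connect()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the failure to the caller
            delay = min(base_delay * 2 ** attempt, max_delay)
            # Jitter spreads reconnect attempts from many clients apart.
            time.sleep(delay + random.uniform(0, delay / 10))
```

In a streaming batch client, something like this would wrap the gRPC stream re-establishment so that transient server unavailability does not immediately fail the batch.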
No description provided.