Allow incremental bulk request execution #111865
Tim-Brooks merged 9 commits into elastic:partial-rest-requests from
Conversation
Pinging @elastic/es-distributed (Team:Distributed)
server/src/main/java/org/elasticsearch/action/bulk/IncrementalBulkService.java
        handleBulkFailure(isFirstRequest, e);
    }
}, nextItems));
incrementalRequestSubmitted = true;
Should we set this before invoking client.bulk (which would mean moving the isFirstRequest calculation out)? While this might work in a Netty event loop with a single thread per request, I wonder if we want to rely on that. I.e., once we invoke nextItems.run(), I could see this method being invoked from another thread (or perhaps even the same thread, in case the bulk request fails). Perhaps that cannot happen in Netty, but it seems subtle to rely on.
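The reentrancy concern can be illustrated with a minimal, self-contained sketch. This is not the actual Elasticsearch code: the stand-in client and all fields other than incrementalRequestSubmitted are assumed for illustration. If the listener can fire synchronously (or on another thread), setting the flag before the call keeps every observer consistent.

```java
import java.util.function.Consumer;

// Minimal sketch of the ordering concern: a stand-in async client whose
// listener may fire synchronously (e.g. on immediate failure), mirroring
// the "perhaps even same thread" case from the review comment.
class IncrementalSubmitSketch {
    private volatile boolean incrementalRequestSubmitted = false;
    boolean flagSeenByCallback;

    // stand-in for client.bulk(request, listener)
    private void bulk(Consumer<Boolean> listener) {
        listener.accept(Boolean.TRUE); // completes synchronously here
    }

    void submit() {
        // set the flag before invoking the client, so a reentrant or
        // cross-thread callback observes a consistent value
        incrementalRequestSubmitted = true;
        bulk(ok -> flagSeenByCallback = incrementalRequestSubmitted);
    }

    public static void main(String[] args) {
        IncrementalSubmitSketch sketch = new IncrementalSubmitSketch();
        sketch.submit();
        System.out.println(sketch.flagSeenByCallback); // prints "true"
    }
}
```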
@Override
public void onResponse(BulkResponse bulkResponse) {
    responses.add(bulkResponse);
    releaseCurrentReferences();
It would be nice to null out bulkRequest here. That would also allow us to assert that bulkRequest != null in add/lastItems to verify the interaction. I think doing it in releaseCurrentReferences could be beneficial.
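A hedged sketch of that suggestion (the field and method names follow the diff; everything else, including the Object stand-in for BulkRequest, is assumed):

```java
// Sketch: null out the request reference once it has been submitted, so
// add()/lastItems() can assert the expected interaction order and the
// request's buffers become collectable earlier.
class BulkHandlerSketch {
    private Object bulkRequest = new Object(); // stand-in for BulkRequest

    void add(Object item) {
        assert bulkRequest != null : "add() after the request was released";
        // ... accumulate item into bulkRequest ...
    }

    void releaseCurrentReferences() {
        bulkRequest = null; // drop the reference; a stale add() now trips the assert
    }

    boolean released() {
        return bulkRequest == null;
    }

    public static void main(String[] args) {
        BulkHandlerSketch handler = new BulkHandlerSketch();
        handler.add("op");
        handler.releaseCurrentReferences();
        System.out.println(handler.released()); // prints "true"
    }
}
```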
@Override
public void onFailure(Exception e) {
    handleBulkFailure(isFirstRequest, e);
I think a failure here results in us sending back a response, even if there was never an addItems call? I think the after-block below needs to check for a globalFailure and call onFailure in that case.
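A sketch of the suggested after-block check (all names here are illustrative stand-ins, not the actual IncrementalBulkService types): when a global failure was recorded, possibly before any addItems call, complete with onFailure instead of assembling a combined response.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: the finishing step inspects a recorded global failure before
// building the combined response, so a request that failed up front is
// completed via the failure path rather than an empty/partial response.
class AfterBlockSketch {
    Exception globalFailure;                       // set when the whole request failed
    final List<String> responses = new ArrayList<>(); // partial bulk responses

    void finish(Consumer<String> onResponse, Consumer<Exception> onFailure) {
        if (globalFailure != null) {
            onFailure.accept(globalFailure);                 // fail the whole request
        } else {
            onResponse.accept(String.join(",", responses));  // combined response
        }
    }

    public static void main(String[] args) {
        AfterBlockSketch sketch = new AfterBlockSketch();
        sketch.globalFailure = new RuntimeException("bulk failed");
        sketch.finish(
            r -> System.out.println("response: " + r),
            e -> System.out.println("failure: " + e.getMessage())); // prints "failure: bulk failed"
    }
}
```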
Allow a single bulk request to be passed to Elasticsearch in multiple parts. Once a certain memory threshold or number of operations have been received, the request can be split and submitted for processing.
This commit backports all of the work introduced in #113044:

* #111438 - 5e1f655
* #111865 - 478baf1
* #112179 - 1b77421
* #112227 - cbcbc34
* #112267 - c00768a
* #112154 - a03fb12
* #112479 - 95b42a7
* #112608 - ce2d648
* #112629 - 0d55dc6
* #112767 - 2dbbd7d
* #112724 - 58e3a39
* dce8a0b
* #112974 - 92daeeb
* 529d349
* #113161 - e3424bd
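The behaviour described above can be sketched as a small accumulator. The thresholds, method names, and the flush stand-in below are illustrative, not the actual IncrementalBulkService API: parts arrive one at a time, and once an operation count or memory threshold is crossed, the accumulated chunk is submitted.

```java
import java.util.ArrayList;
import java.util.List;

// Conceptual sketch of incremental bulk handling: accumulate operations as
// they arrive and submit a chunk whenever a threshold is reached, instead of
// buffering the entire bulk request in memory first.
class IncrementalBulkSketch {
    static final int MAX_OPS = 3;          // illustrative operation-count threshold
    static final long MAX_BYTES = 1 << 20; // illustrative memory threshold (1 MiB)

    final List<String> pending = new ArrayList<>();
    long pendingBytes = 0;
    int chunksSubmitted = 0;

    // called for each part of the request as it is received
    void addItem(String op) {
        pending.add(op);
        pendingBytes += op.length();
        if (pending.size() >= MAX_OPS || pendingBytes >= MAX_BYTES) {
            flush();
        }
    }

    // called when the final part of the request arrives
    void lastItems() {
        if (pending.isEmpty() == false) {
            flush();
        }
    }

    private void flush() {
        chunksSubmitted++; // stand-in for submitting via client.bulk(...)
        pending.clear();
        pendingBytes = 0;
    }

    public static void main(String[] args) {
        IncrementalBulkSketch sketch = new IncrementalBulkSketch();
        for (int i = 0; i < 7; i++) {
            sketch.addItem("{\"index\":{}}");
        }
        sketch.lastItems();
        System.out.println(sketch.chunksSubmitted); // prints "3" (ops 1-3, 4-6, 7)
    }
}
```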