
Force kill testcluster nodes #37353

Merged
alpar-t merged 7 commits into elastic:master from alpar-t:testclusters-force-kill-clusters
Feb 18, 2019

Conversation

@alpar-t
Contributor

@alpar-t alpar-t commented Jan 11, 2019

Since we are not reusing them, no point in waiting for a clean stop.

@alpar-t alpar-t requested a review from rjernst January 11, 2019 10:36
@elasticmachine
Collaborator

Pinging @elastic/es-core-infra

@alpar-t
Contributor Author

alpar-t commented Jan 12, 2019

@elasticmachine run the gradle build tests 1

Member

@rjernst rjernst left a comment


Just one ask

);
if (processHandle.isAlive()) {
    processHandle.destroy();
    if (forcibly) {
Member


I think the order of this all can be cleaned up to not have doubling up on waits.

Contributor Author


@rjernst done, ready for review

Member


We're still doing multiple aliveness checks when we already know it is killed. Here is what I was envisioning:

if (processHandle.isAlive() == false) {
    logger.info("Process was not running when we tried to terminate it.");
    return;
}

// Stop all children first, ES could actually be a child when there's some wrapper process like on Windows.
processHandle.children().forEach(each -> stopHandle(each, forcibly));

logProcessInfo("Terminating elasticsearch process" + (forcibly ? " forcibly " : "gratefully") + ":", processHandle.info());

if (forcibly) {
    processHandle.destroyForcibly();
} else {
    processHandle.destroy();
    waitForProcessToExit(processHandle);
    if (processHandle.isAlive()) {
        logger.info("process did not terminate after {} {}, stopping it forcefully",
            ES_DESTROY_TIMEOUT, ES_DESTROY_TIMEOUT_UNIT);
        processHandle.destroyForcibly();
    }
}

waitForProcessToExit(processHandle);
if (processHandle.isAlive()) {
    throw new TestClustersException("Was not able to terminate elasticsearch process");
}
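The termination order envisioned above can be sketched as a self-contained program. This is not the PR's actual code: the timeout constants, the `RuntimeException` in place of `TestClustersException`, and the `sleep` child process used for the demo are illustrative stand-ins.

```java
import java.util.concurrent.TimeUnit;

public class StopSketch {
    // Illustrative stand-ins for the PR's ES_DESTROY_TIMEOUT / ES_DESTROY_TIMEOUT_UNIT.
    static final long DESTROY_TIMEOUT = 5;
    static final TimeUnit DESTROY_TIMEOUT_UNIT = TimeUnit.SECONDS;

    // Same shape as the suggestion above: bail out if already dead, stop descendants
    // first, attempt a graceful destroy(), then escalate to destroyForcibly().
    static void stopHandle(ProcessHandle handle, boolean forcibly) {
        if (handle.isAlive() == false) {
            return;
        }
        handle.children().forEach(child -> stopHandle(child, forcibly));
        if (forcibly) {
            handle.destroyForcibly();
        } else {
            handle.destroy();
            waitForProcessToExit(handle);
            if (handle.isAlive()) {
                handle.destroyForcibly();
            }
        }
        waitForProcessToExit(handle);
        if (handle.isAlive()) {
            throw new RuntimeException("Was not able to terminate process " + handle.pid());
        }
    }

    static void waitForProcessToExit(ProcessHandle handle) {
        try {
            // onExit() completes when the process dies; get() bounds the wait.
            handle.onExit().get(DESTROY_TIMEOUT, DESTROY_TIMEOUT_UNIT);
        } catch (Exception e) {
            // Timeout or interruption: the caller re-checks isAlive() and escalates.
        }
    }

    public static void main(String[] args) throws Exception {
        // Demo assumes a POSIX `sleep` binary is available on the PATH.
        Process child = new ProcessBuilder("sleep", "30").start();
        stopHandle(child.toHandle(), false);
        System.out.println("alive after stop: " + child.isAlive());
    }
}
```

Run on a POSIX system, this prints `alive after stop: false`: the plain `destroy()` delivers SIGTERM, `sleep` exits well inside the timeout, and the forcible path is never needed.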

Contributor Author


@rjernst implemented as suggested

@alpar-t
Contributor Author

alpar-t commented Jan 24, 2019

@elasticmachine run elasticsearch-ci/1 again

Contributor

@mark-vieira mark-vieira left a comment


Trivial typo. Otherwise LGTM.

processHandle.children().forEach(each -> stopHandle(each, forcibly));

logProcessInfo(
    "Terminating elasticsearch process" + (forcibly ? " forcibly " : "gratefully") + ":",
Contributor


Typo, should be "gracefully". Not to say we are not grateful 😉

Member

@rjernst rjernst left a comment


LGTM too. One last nit.

} else {
    processHandle.destroy();
    waitForProcessToExit(processHandle);
    if (processHandle.isAlive()) {
Member


We can invert this, so:

if (processHandle.isAlive() == false) {
    return;
}
logger.info("process did not terminate after {} {}, stopping it forcefully",
    ES_DESTROY_TIMEOUT, ES_DESTROY_TIMEOUT_UNIT);
processHandle.destroyForcibly();

@alpar-t alpar-t merged commit 23ab166 into elastic:master Feb 18, 2019
@alpar-t alpar-t added the v6.7.0 label Feb 18, 2019
@alpar-t alpar-t deleted the testclusters-force-kill-clusters branch February 18, 2019 12:31
alpar-t added a commit that referenced this pull request Feb 18, 2019
* Force kill testcluster nodes
alpar-t added a commit that referenced this pull request Feb 18, 2019
* Force kill testcluster nodes
alpar-t added a commit that referenced this pull request Feb 18, 2019
* Force kill testcluster nodes
jasontedor added a commit to jasontedor/elasticsearch that referenced this pull request Feb 18, 2019
…ate-file

* elastic/master:
  Remove tests and branches that will never execute (elastic#38772)
  also check ccr stats api return empty response in ensureNoCcrTasks()
  Add overlapping, before, after filters to intervals query (elastic#38999)
  Mute test elastic#38949
  Add remote recovery to ShardFollowTaskReplicationTests (elastic#39007)
  [ML] More advanced post-test cleanup of ML indices (elastic#39049)
  wait for shard to be allocated before executing a resume follow api
  Update track-total-hits.asciidoc
  Force kill testcluster nodes (elastic#37353)
  Make pullFixture a task dependency of resolveAllDependencies (elastic#38956)
  set minimum supported version (elastic#39043)
  Enforce Completion Context Limit (elastic#38675)
  Mute test
  Don't close caches while there might still be in-flight requests. (elastic#38958)
  Fix elastic#38623 remove xpack namespace REST API (elastic#38625)
  Add data frame feature (elastic#38934)
  Test bi-directional index following during a rolling upgrade. (elastic#38962)
  Generate mvn pom for ssl-config library (elastic#39019)
  Mute testRetentionLeaseIsRenewedDuringRecovery
@mark-vieira mark-vieira added the Team:Delivery Meta label for Delivery team label Nov 11, 2020

Labels

:Delivery/Build Build or test infrastructure >non-issue Team:Delivery Meta label for Delivery team v6.7.0 v7.2.0 v8.0.0-alpha1


7 participants