Do not set SO_LINGER to 0 when not shutting down #26871

Merged: Tim-Brooks merged 2 commits into elastic:master from Tim-Brooks:do_not_so_linger on Oct 4, 2017

Conversation

@Tim-Brooks (Contributor):

This is a follow-up to #26764. That commit set SO_LINGER to 0 in order
to fix a scenario where we were running out of resources during CI. We
are primarily interested in setting this to 0 when stopping the
transport. Allowing TIME_WAIT is standard for other failure scenarios
during normal operation.

Unfortunately, that commit set SO_LINGER to 0 every time we close
NodeChannels. NodeChannels can be closed in the case of an exception or
other failure (such as failing to parse a response). We want to disable
linger only when actually shutting down.

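For context, here is a minimal sketch of the intended behavior, written against plain java.net.Socket rather than the transport's actual channel type; the class and method names are illustrative, not from the codebase:

    import java.io.IOException;
    import java.net.Socket;

    // Illustrative sketch only: SO_LINGER with timeout 0 makes close()
    // send an RST, so the socket skips TIME_WAIT entirely. That is
    // desirable when tearing the whole transport down, but during normal
    // operation a failed channel should close with the usual FIN
    // handshake and pass through TIME_WAIT.
    class ChannelCloser {
        private volatile boolean transportStopping = false;

        // Called once when the transport shuts down.
        void onTransportStop() {
            transportStopping = true;
        }

        void close(Socket socket) throws IOException {
            if (transportStopping) {
                // Linger enabled, timeout 0 => RST on close, no TIME_WAIT.
                socket.setSoLinger(true, 0);
            }
            socket.close();
        }
    }

The point of the guard is that the RST path avoids accumulating sockets in TIME_WAIT, which is what exhausted resources in CI, while leaving the normal FIN/TIME_WAIT path intact for everyday connection failures.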
@Tim-Brooks added the >bug, review, :Distributed/Network (HTTP and internode communication implementations), v6.1.0, and v7.0.0 labels on Oct 3, 2017.
@original-brownbear (Contributor) left a comment:

makes sense imo => LGTM :)

@jasontedor (Member) left a comment:

LGTM.

@s1monw (Contributor) left a comment:

Left a suggestion, LGTM otherwise.

The suggestion targets this hunk:

    private final AtomicLong requestIdGenerator = new AtomicLong();
    private final CounterMetric numHandshakes = new CounterMetric();
    private static final String HANDSHAKE_ACTION_NAME = "internal:tcp/handshake";
    private volatile boolean closingTransport = false;

I think we can just use this.stopped() instead of having a separate boolean? We are setting the state value before we execute the doStop method.
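A hedged sketch of what that suggestion amounts to, with illustrative names (the real transport's lifecycle type differs):

    import java.io.IOException;
    import java.net.Socket;
    import java.util.concurrent.atomic.AtomicReference;

    // Illustrative: reuse the component's lifecycle state instead of a
    // dedicated closingTransport boolean.
    class LifecycleSketch {
        enum State { STARTED, STOPPED }

        private final AtomicReference<State> state =
            new AtomicReference<>(State.STARTED);

        boolean stopped() {
            return state.get() == State.STOPPED;
        }

        void stop() {
            // The state moves to STOPPED before doStop() runs, so channel
            // closes triggered by shutdown already observe stopped() == true.
            state.set(State.STOPPED);
            doStop();
        }

        private void doStop() {
            // close all open channels here
        }

        void closeChannel(Socket socket) throws IOException {
            if (stopped()) {
                socket.setSoLinger(true, 0); // skip TIME_WAIT only on shutdown
            }
            socket.close();
        }
    }

This removes the extra volatile field; the ordering guarantee (state set to STOPPED before doStop executes) is what makes the check safe during shutdown.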

@Tim-Brooks (Author) replied:

Great suggestion. I made that change.

@Tim-Brooks merged commit ca35fca into elastic:master on Oct 4, 2017.
Tim-Brooks added a commit that referenced this pull request on Oct 4, 2017.
@Tim-Brooks deleted the do_not_so_linger branch on November 14, 2018 at 14:50.