Use the RECV_STATUS op in ALTS handshake RPCs#20929

Merged
apolcyn merged 2 commits into grpc:master from apolcyn:alts_recv_status on Nov 11, 2019

Conversation

@apolcyn
Contributor

@apolcyn apolcyn commented Nov 5, 2019

This is a pre-req to #20722, which needs to make use of the RECV_STATUS op in order to manage a global queue. Note that though #20722 is meant to be a short-term change, this change is meant to be long term (the ALTS handshake RPC is currently somewhat unique in not doing this op).

Also, this is a replacement of #20687 (putting an explicit deadline on ALTS handshake RPCs is likely unnecessary, since a customized reconnect backoff time can serve that same purpose).

Some changes had to be made to the alts_handshaker_client_test and alts_tsi_handshaker_test tests in order to make room for the new internal behaviors of alts_grpc_handshaker_client regarding lifetimes and how exactly the TSI next callback is invoked.



@apolcyn apolcyn added area/security lang/core release notes: no Indicates if PR should not be in release notes labels Nov 5, 2019
@apolcyn apolcyn marked this pull request as ready for review November 5, 2019 09:44
@apolcyn apolcyn assigned yihuazhang and markdroth and unassigned yihuazhang Nov 5, 2019
@apolcyn
Contributor Author

apolcyn commented Nov 6, 2019

Recent updates fix a bug that this change originally introduced:

Before this change, when the security handshaker's peer endpoint read failed, it would fail the subchannel connect attempt and destruct its tsi_handshaker object, but it would not call tsi_handshaker_shutdown. This was still ok with ALTS handshakes because the alts_tsi_handshaker object's dtor ran grpc_call_unref, which cancelled any ongoing handshake RPC and free'd up associated resources.

With this change, the alts_tsi_handshaker dtor no longer runs grpc_call_unref if there is an outstanding call that's waiting for a RECV_STATUS op to complete - instead, it just drops its ref and defers full destruction to the RECV_STATUS op completion. This introduced a problem: if the security handshaker failed the subchannel connect attempt but didn't call tsi_handshaker_shutdown, then we wouldn't cancel any ongoing handshake RPC (previously, grpc_call_unref would do this cancellation). So recent changes make the security handshaker be sure to call tsi_handshaker_shutdown whenever it is failing the handshake attempt and hasn't already been shut down. The issue could perhaps also have been fixed by making the alts_tsi_handshaker dtor cancel any ongoing call, but the change in the security handshaker seemed more predictable.
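The lifetime scheme described above can be sketched with a toy model (hypothetical class and method names, not the actual gRPC code): the creator and the pending RECV_STATUS_ON_CLIENT op each hold one ref, destruction waits for both to drop, and cancelling the handshake RPC is now the responsibility of an explicit shutdown call rather than of the dtor.

```cpp
#include <atomic>
#include <cassert>

// Hypothetical sketch of the two-ref lifetime scheme: one ref is held by
// the entity that created the handshaker client, one by the pending
// RECEIVE_STATUS_ON_CLIENT op. Because the dtor path no longer cancels the
// call, whoever fails the handshake must call Shutdown() to cancel the RPC,
// mirroring the security-handshaker change described above.
class FakeHandshakerClient {
 public:
  FakeHandshakerClient() : refs_(2) {}  // creator ref + pending RECV_STATUS ref

  // Stands in for tsi_handshaker_shutdown -> cancelling the handshake RPC.
  void Shutdown() { call_cancelled_ = true; }

  // Creator drops its ref (stands in for the alts_tsi_handshaker dtor).
  // Returns true if this was the last ref, i.e. the object is destroyed now.
  bool UnrefByCreator() { return Unref(); }

  // The RECV_STATUS_ON_CLIENT completion drops the op's ref.
  bool UnrefByRecvStatusOp() { return Unref(); }

  bool call_cancelled() const { return call_cancelled_; }

 private:
  bool Unref() { return refs_.fetch_sub(1) == 1; }

  std::atomic<int> refs_;
  bool call_cancelled_ = false;
};
```

In this sketch, the creator unref alone does not destroy the object; full destruction is deferred until the RECV_STATUS op also completes, which is the behavior change this PR introduces.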

This is otherwise still ready for review.

Member

@markdroth markdroth left a comment


This looks pretty good. My comments are fairly minor.

Please make sure @yihuazhang reviews before merging.

Reviewed 4 of 6 files at r1, 2 of 4 files at r2, 2 of 2 files at r3.
Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @apolcyn and @jiangtaoli2016)


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 54 at r1 (raw file):

  /* One ref is held by the entity that created this handshaker_client, and
   * another ref is held by the pending RECEIVE_STATUS_ON_CLIENT op. */
  gpr_refcount refs;

@yihuazhang, I would really like to see this code converted to C++, so that we can make this a C++ class that inherits from grpc_core::InternallyRefCounted<>.


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 156 at r3 (raw file):

      return;
    }
    bool have_final_result = false;

Could just say:

const bool have_final_result =
    client->pending_recv_message_result->result != nullptr ||
    client->pending_recv_message_result->status != TSI_OK;

src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 192 at r3 (raw file):

}

static void handle_response_done_locked(alts_grpc_handshaker_client* client,

It looks like this is invoked when not holding the lock, so I think the _locked suffix should be removed.


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 300 at r3 (raw file):

    }
  }
  handle_response_done_locked(client,

Do we need to check for the recv_trailing_metadata response in this case? This isn't a failure, so presumably the handshaker's next() will be called again, right?

I'm thinking that when we know this is not going to be the last response (i.e., recv_buffer is not null), it would be more efficient to just invoke the callback directly, so that we don't suffer the overhead of the memory allocation and acquiring the mutex for any but the last read on the stream.


test/core/tsi/alts/handshaker/alts_concurrent_connectivity_test.cc, line 86 at r3 (raw file):

  // The main goal of these tests is to stress concurrent ALTS handshakes,
  // so we prevent subchannel sharing.
  grpc_arg disable_subchannel_sharing_arg = grpc_channel_arg_integer_create(

Would be clearer written as follows:

std::vector<grpc_arg> new_args;
new_args.push_back(grpc_channel_arg_integer_create(
    const_cast<char*>(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL), true));
if (reconnect_backoff_ms != 0) {
  new_args.push_back(grpc_channel_arg_integer_create(
      const_cast<char*>("grpc.testing.fixed_reconnect_backoff_ms"),
      reconnect_backoff_ms));
}
grpc_channel_args* channel_args = grpc_channel_args_copy_and_add(
    nullptr, new_args.data(), new_args.size());

Contributor Author

@apolcyn apolcyn left a comment


Reviewable status: all files reviewed, 4 unresolved discussions (waiting on @apolcyn, @jiangtaoli2016, and @markdroth)


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 156 at r3 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

Could just say:

const bool have_final_result =
    client->pending_recv_message_result->result != nullptr ||
    client->pending_recv_message_result->status != TSI_OK;

Done.


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 192 at r3 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

It looks like this is invoked when not holding the lock, so I think the _locked suffix should be removed.

Whoops, sorry, this was left over from an earlier iteration that used the lock more widely.


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 300 at r3 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

Do we need to check for the recv_trailing_metadata response in this case? This isn't a failure, so presumably the handshaker's next() will be called again, right?

I'm thinking that when we know this is not going to be the last response (i.e., recv_buffer is not null), it would be more efficient to just invoke the callback directly, so that we don't suffer the overhead of the memory allocation and acquiring the mutex for any but the last read on the stream.

Unfortunately, this may or may not actually be the last response message, i.e. this response message can be the one that completes the handshake (in which case tsi_handshaker_next won't be called again). In a successful handshake, the last response message received from the handshake server contains a HandshakerResult field, from which the client forms a tsi_handshaker_result, and which terminates the handshake.

Note that assumptions this PR makes are:

  1. We won't call tsi_handshaker_next again if we complete the current next callback with a non-null tsi_handshaker_result object
  2. We won't call tsi_handshaker_next again if we complete the current next callback with tsi_result that is not TSI_OK (this scenario would indicate some failure in the handshake RPC)

Now that I'm describing this, I'm realizing there's one thing which I don't think is necessarily a bug but is potentially fragile: if we call back with a nullptr tsi_handshaker_result but a TSI_INCOMPLETE_DATA status code, then we actually aren't done with the handshake, and so doing what this PR is doing, waiting for a status code, would be incorrect. AFAIK TSI_INCOMPLETE_DATA is not a status code that will be generated by ALTS handshake servers though.

@yihuazhang yihuazhang self-requested a review November 7, 2019 20:37
@yihuazhang
Contributor

@markdroth Sorry that I could not spend any time on C++'fying this class. I will squeeze it into my Q4 (or Q1) OKR.

Contributor Author

@apolcyn apolcyn left a comment


Reviewable status: 5 of 7 files reviewed, 4 unresolved discussions (waiting on @apolcyn, @jiangtaoli2016, @markdroth, and @yihuazhang)


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 300 at r3 (raw file):
Re my own comment:

Now that I'm describing this, I'm realizing there's one thing which I don't think is necessarily a bug but is potentially fragile: if we call back with a nullptr tsi_handshaker_result but a TSI_INCOMPLETE_DATA status code, then we actually aren't done with the handshake, and so doing what this PR is doing, waiting for a status code, would be incorrect. AFAIK TSI_INCOMPLETE_DATA is not a status code that will be generated by ALTS handshake servers though.

I chatted with @jiangtao and @yihuaz offline, and audited the code. We have a guarantee that we will never invoke the tsi_handshaker_next callback with status == TSI_INCOMPLETE_DATA, because:

  1. The ALTS handshaker client code will never generate a TSI_INCOMPLETE_DATA status on its own
  2. There doesn't exist a status that the ALTS handshake server can return that tsi_result alts_tsi_utils_convert_to_tsi_result(grpc_status_code code) can convert to TSI_INCOMPLETE_DATA

So I still think we should stick with the current code in the PR.
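For reference, the property relied on here can be demonstrated with a toy version of the conversion function. The enum values and case list below are illustrative stand-ins, not the actual gRPC definitions; the point being modeled is that no grpc_status_code converts to TSI_INCOMPLETE_DATA.

```cpp
#include <cassert>

// Minimal stand-ins for the real enums; names echo gRPC's but the values
// and the exact case list are illustrative only.
enum grpc_status_code {
  GRPC_STATUS_OK,
  GRPC_STATUS_INVALID_ARGUMENT,
  GRPC_STATUS_NOT_FOUND,
  GRPC_STATUS_INTERNAL,
  GRPC_STATUS_UNKNOWN
};

enum tsi_result {
  TSI_OK,
  TSI_INVALID_ARGUMENT,
  TSI_NOT_FOUND,
  TSI_INTERNAL_ERROR,
  TSI_UNKNOWN_ERROR,
  TSI_INCOMPLETE_DATA
};

// Toy conversion in the spirit of alts_tsi_utils_convert_to_tsi_result:
// whatever status the handshake server returns, the converted result is
// never TSI_INCOMPLETE_DATA.
tsi_result convert_to_tsi_result(grpc_status_code code) {
  switch (code) {
    case GRPC_STATUS_OK: return TSI_OK;
    case GRPC_STATUS_INVALID_ARGUMENT: return TSI_INVALID_ARGUMENT;
    case GRPC_STATUS_NOT_FOUND: return TSI_NOT_FOUND;
    case GRPC_STATUS_INTERNAL: return TSI_INTERNAL_ERROR;
    default: return TSI_UNKNOWN_ERROR;
  }
}
```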

I'm thinking that when we know this is not going to be the last response (i.e., recv_buffer is not null), it would be more efficient to just invoke the callback directly, so that we don't suffer the overhead of the memory allocation and acquiring the mutex for any but the last read on the stream.

Related to the above comment about needing further inspection at this point to know whether or not we're done, and whether next will be called again: we might be able to do some short-circuiting here, e.g. check if result == nullptr or check if status != TSI_OK, but what I like about the current code is that the logic of whether or not we're done yet is kept entirely within maybe_complete_tsi_next... PLMK though

Member

@markdroth markdroth left a comment


Looks good! Only minor comments remaining. Feel free to merge after addressing them.

Reviewed 2 of 2 files at r4.
Reviewable status: all files reviewed, 2 unresolved discussions (waiting on @apolcyn, @jiangtaoli2016, and @yihuazhang)


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 300 at r3 (raw file):

Previously, apolcyn wrote…

Re my own comment:

Now that I'm describing this, I'm realizing there's one thing which I don't think is necessarily a bug but is potentially fragile: if we call back with a nullptr tsi_handshaker_result but a TSI_INCOMPLETE_DATA status code, then we actually aren't done with the handshake, and so doing what this PR is doing, waiting for a status code, would be incorrect. AFAIK TSI_INCOMPLETE_DATA is not a status code that will be generated by ALTS handshake servers though.

I chatted with @jiangtao and @yihuaz offline, and audited the code. We have a guarantee that we will never invoke the tsi_handshaker_next callback with status == TSI_INCOMPLETE_DATA, because:

  1. The ALTS handshaker client code will never generate a TSI_INCOMPLETE_DATA status on its own
  2. There doesn't exist a status that the ALTS handshake server can return that tsi_result alts_tsi_utils_convert_to_tsi_result(grpc_status_code code) can convert to TSI_INCOMPLETE_DATA

So I still think we should stick with the current code in the PR.

I'm thinking that when we know this is not going to be the last response (i.e., recv_buffer is not null), it would be more efficient to just invoke the callback directly, so that we don't suffer the overhead of the memory allocation and acquiring the mutex for any but the last read on the stream.

Related to the above comment about needing further inspection at this point to know whether or not we're done, and whether next will be called again: we might be able to do some short-circuiting here, e.g. check if result == nullptr or check if status != TSI_OK, but what I like about the current code is that the logic of whether or not we're done yet is kept entirely within maybe_complete_tsi_next... PLMK though

I'm okay with this for now, but this does seem like it adds more overhead than is really necessary. Maybe add a TODO indicating that if this becomes a performance problem, we should find a way to avoid the extra overhead on all but the last message?


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 156 at r4 (raw file):

      return;
    }
    bool have_final_result =

This can be const.

Contributor Author

@apolcyn apolcyn left a comment


Reviewable status: 6 of 7 files reviewed, 2 unresolved discussions (waiting on @jiangtaoli2016, @markdroth, and @yihuazhang)


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 300 at r3 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

I'm okay with this for now, but this does seem like it adds more overhead than is really necessary. Maybe add a TODO indicating that if this becomes a performance problem, we should find a way to avoid the extra overhead on all but the last message?

Added this TODO


src/core/tsi/alts/handshaker/alts_handshaker_client.cc, line 156 at r4 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

This can be const.

Done.


test/core/tsi/alts/handshaker/alts_concurrent_connectivity_test.cc, line 86 at r3 (raw file):

Previously, markdroth (Mark D. Roth) wrote…

Would be clearer written as follows:

std::vector<grpc_arg> new_args;
new_args.push_back(grpc_channel_arg_integer_create(
    const_cast<char*>(GRPC_ARG_USE_LOCAL_SUBCHANNEL_POOL), true));
if (reconnect_backoff_ms != 0) {
  new_args.push_back(grpc_channel_arg_integer_create(
      const_cast<char*>("grpc.testing.fixed_reconnect_backoff_ms"),
      reconnect_backoff_ms));
}
grpc_channel_args* channel_args = grpc_channel_args_copy_and_add(
    nullptr, new_args.data(), new_args.size());

done (in prior update)

GPR_ASSERT(client->pending_recv_message_result == nullptr);
client->pending_recv_message_result = pending_recv_message_result;
}
if (client->pending_recv_message_result == nullptr) {
Contributor


Does this branch handle the situation where the ALTS handshaker server was unable to handle a client's request and notify the handshaker client about it?

Contributor Author


Yes, if I understand the question: if the ALTS handshake server was unable to handle the client's request and also unable to notify the client about it, then the following happens:

  1. pending RECV_MESSAGE op and pending RECV_STATUS_ON_CLIENT op will both hang
  2. After subchannel connection timeout elapses, the subchannel's associated handshake manager will shut down the ongoing security handshake, which will invoke grpc_call_cancel_internal on the ALTS RPC
  3. pending RECV_MESSAGE and RECV_STATUS_ON_CLIENT ops will then both complete
  4. handle_response_done will then be invoked with a non-null pending_recv_message_result whose status != TSI_OK
  5. on_status_received will then be invoked, setting receive_status_finished
  6. We'll then invoke the TSI next callback, and the handshake will be terminated.

Note that the added test in alts_concurrent_connectivity_test.cc: TestHandshakeFailsFastWhenHandshakeServerHangsAfterAccepting, exercises this case.

Contributor


Thanks much for the explanation. It makes sense to me. Could you please also leave this conversation open for future reference?

Contributor Author


np, will leave this open

@yihuazhang
Contributor

Hi Alex,

I am trying to understand the motivation of this PR. Does the PR handle the situation where the ALTS handshake server was unable to handle handshake clients' requests (due to exceeding a resource limit) and thus was unable to notify the client about it?

op++;
GPR_ASSERT(op - ops <= kHandshakerClientOpNum);
gpr_ref(&client->refs);
grpc_call_error call_error =
Contributor


For the client's first handshake message, don't we end up calling the TSI next callback twice? Once in on_status_received and once in on_handshaker_service_resp_recv?

Contributor Author


The client's first handshake message response, in the successful case, should result in a received message having no result field and an OK status. In this case, maybe_complete_tsi_next will not wait for the RECV_STATUS op to complete.

Eventually, the last handshaker message will arrive, having at least either non-OK status or a filled in result field. In this case, we'll wait for the handshake RPC's status to be received, and then invoke the TSI next callback just once.

Lastly, note that if the RECV_STATUS op completes while we don't have a TSI next callback outstanding (e.g. security handshaker is waiting for a read from remote peer), then we won't have a non-null pending_recv_message_result, and so we won't accidentally invoke the TSI next callback an extra time
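The gating being described here can be sketched with a toy model (hypothetical names; in the real code this logic lives in maybe_complete_tsi_next in alts_handshaker_client.cc): the TSI next callback fires immediately for intermediate responses (no result field, OK status), but a final response (non-null result or non-OK status) waits until the RECV_STATUS op has completed.

```cpp
#include <cassert>

// Stand-in for the pending recv-message result.
struct PendingResult {
  bool has_result;  // stands in for result != nullptr
  bool status_ok;   // stands in for status == TSI_OK
};

// Toy model of the handshaker client's completion gating.
struct ClientState {
  bool receive_status_finished = false;
  int next_callback_invocations = 0;

  void MaybeCompleteTsiNext(const PendingResult& r) {
    // A "final" result is one that would end the handshake: either a
    // filled-in handshaker result or a non-OK status.
    const bool have_final_result = r.has_result || !r.status_ok;
    if (have_final_result && !receive_status_finished) {
      return;  // defer the callback until the RECV_STATUS op completes
    }
    ++next_callback_invocations;
  }
};
```

In the real code, the RECV_STATUS completion re-drives this check with the stored pending result; the toy test below simulates that by calling the method again after flipping receive_status_finished.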

Contributor


In the last handshaker message, why do we wait for the handshake RPC's status to be received?

Contributor


Please also leave this conversation open.

Contributor Author


In the last handshaker message, why do we wait for the handshake RPC's status to be received?

My motivation for this is mainly just to try to be sure that we continue to drive the handshake RPC forward until it fully completes. This is basically leveraging the fact that there is an outstanding TSI next call (which ultimately suggests that the associated subchannel hasn't yet connected, and that someone should be polling on the relevant file descriptors of this handshake RPC) to make sure that we finish the handshake RPC in a timely manner.

In a degenerate case, suppose that the handshake server sends a reply message, and so the RECV_MESSAGE op completes, but the data on the wire hasn't yet arrived to complete the RECV_STATUS op. If we completed the TSI handshake at this point, then the subchannel could become ready, the application could quickly complete its RPC, and then the application could stop polling on the completion queue of its RPC (if it e.g. only wanted to do a single RPC). The data necessary to complete the RECV_STATUS op on the wire could then show up just after, but because we don't have anyone guaranteed to be polling on the relevant file descriptors of the handshake RPC (except for the optional backup poller, which runs on a fixed time interval), the handshake RPC may just hang there and never "complete".

I.e., this provides a kind of guarantee that every handshake RPC is either driven forward to full completion in a timely manner, or cancelled (due to handshake failure or subchannel connection timeout).

Contributor


Thanks much for the explanation. It makes sense to me.

Contributor


In xds_client.cc (https://github.com/grpc/grpc/blob/master/src/core/ext/filters/client_channel/xds/xds_client.cc), we also applied a similar model, i.e., scheduling a grpc_closure upon receiving both the RECV_MESSAGE message and the RECV_STATUS status.

@apolcyn
Contributor Author

apolcyn commented Nov 8, 2019

I am trying to understand the motivation of this PR. Does the PR handle the situation where the ALTS handshake server was unable to handle handshake clients' requests (due to exceeding a resource limit) and thus was unable to notify the client about it?

The immediate motivation is to prepare for #20722

There is a specific case in #20722 PR (the global queue), that I'm using the RECV_STATUS op to handle:

Specifically, that PR moves the global handshake RPC queue forward as follows:

  • when making a new handshake RPC, first check if there's space for another outstanding handshake RPC, and enqueue the handshake if there isn't
  • when completing a handshake RPC, check if any handshake is waiting for an "outstanding handshake RPC" slot to become available, and start it if so.

So if the security handshaker is e.g. waiting for a read from the peer while not having an outstanding TSI next call, then we still need a way to move the queue forward once we know that the current handshake RPC is done; the RECV_STATUS op's completion is IMO a convenient way to do this.
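A rough sketch of the queueing idea (hypothetical API, not the actual #20722 code): at most a fixed number of handshake RPCs run at once, extra starts are queued, and each RPC's RECV_STATUS completion is the hook that starts the next queued handshake.

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>

// Toy bounded queue of handshake RPCs. Names and shape are illustrative.
class HandshakeQueue {
 public:
  explicit HandshakeQueue(int max_outstanding) : max_(max_outstanding) {}

  // Start a handshake now if a slot is free; otherwise enqueue it.
  void StartHandshake(std::function<void()> start) {
    if (outstanding_ < max_) {
      ++outstanding_;
      start();
    } else {
      queued_.push(std::move(start));
    }
  }

  // Invoked when a handshake RPC's RECV_STATUS_ON_CLIENT op completes:
  // free the slot and start the next queued handshake, if any.
  void OnHandshakeDone() {
    --outstanding_;
    if (!queued_.empty()) {
      std::function<void()> next = std::move(queued_.front());
      queued_.pop();
      ++outstanding_;
      next();
    }
  }

  int outstanding() const { return outstanding_; }
  size_t queued() const { return queued_.size(); }

 private:
  int max_;
  int outstanding_ = 0;
  std::queue<std::function<void()>> queued_;
};
```

This mirrors why the RECV_STATUS op matters here: it is the one completion that reliably signals "this handshake RPC is fully done", even when no TSI next call is outstanding.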

Also, #20722 aside, the handshake RPC is currently special in that it's the only RPC I know of that doesn't use the RECV_STATUS op. This is likely error prone due to being abnormal usage of the core call API (FWIW I believe this is also technically not conformant to the C-core API rules - https://github.com/grpc/grpc/blob/master/include/grpc/impl/codegen/grpc_types.h#L572).

alts_handshaker_client base;
/* One ref is held by the entity that created this handshaker_client, and
* another ref is held by the pending RECEIVE_STATUS_ON_CLIENT op. */
gpr_refcount refs;
Contributor


So I believe that by having these refs, we can correctly handle the situation where alts_tsi_handshaker gets destroyed while we still have a pending RECEIVE_STATUS_ON_CLIENT op?

Contributor Author


Exactly, correct

const bool have_final_result =
client->pending_recv_message_result->result != nullptr ||
client->pending_recv_message_result->status != TSI_OK;
if (have_final_result && !client->receive_status_finished) {
Contributor


Could you please add some comments about this line? E.g., it returns early when 1) the handshake successfully completes (client->pending_recv_message_result->result != nullptr) or 2) the handshake fails (client->pending_recv_message_result->status != TSI_OK), and the client has not yet received the response for its RECV_STATUS_ON_CLIENT op.

Contributor


It seems that we will not run the TSI next callback when 1) the handshake completes and succeeds (client->pending_recv_message_result->result != nullptr) or the handshake fails (client->pending_recv_message_result->status != TSI_OK), and 2) we have not yet received the response for the client's RECV_STATUS_ON_CLIENT op. Is it because if the handshake server returns a response for RECV_MESSAGE, it will not return one for RECV_STATUS_ON_CLIENT? Also, if the handshake fails, shouldn't we also invoke the TSI next callback, feeding in the corresponding error status?

Contributor


Please ignore my second question. I think your response to my questions below (i.e., calling TSI next callback twice) answers it :)

Contributor Author


Could you please add some comments about this line? e.g., it returns when 1) handshake successfully completes (client->pending_recv_message_result->result != nullptr) or 2) handshake fails (client->pending_recv_message_result->status != TSI_OK) or client has not received response on client's RECV_STATUS_ON_CLIENT.

Yes, we return early here if "the handshake successfully completes or the handshake fails" and "the status has not been received". I added this comment.

Contributor Author


Also, if the handshake fails, should not we also invoke the TSI next callback by feeding in corresponding error status?

I do think it would make sense to override an otherwise-OK TSI next status with an error if the RECV_STATUS op indicates that the handshake RPC has failed. I've left a TODO for this though; if we do it, I propose saving it for another PR.

Contributor


SGTM

@yihuazhang
Contributor

LGTM. Please feel free to merge after addressing my last comment on "why waiting for the handshake RPC's status to be received for the last handshake message".

@apolcyn
Contributor Author

apolcyn commented Nov 9, 2019

Thanks for the detailed reviews @markdroth and @yihuazhang . Will merge on monday.

@apolcyn
Contributor Author

apolcyn commented Nov 11, 2019

Squashed down commits into one.

Second commit is a trivial post-review change to change a CLOSURE_RUN to a Closure::Run

@apolcyn
Contributor Author

apolcyn commented Nov 11, 2019

Test failures:

objc_macos_opt_native: #20686
Bazel RBE ASAN C++: #15981
Basic Tests C/C++ Macos: #20694
Bazel RBE MSAN C/C++: #21013
Bazel RBE Opt C/C++: #20759 (xds_end2end_test exit code 142)
Interop cloud to prod failed with: fatal: unable to access 'https://go.googlesource.com/text/': The requested URL returned error: 500
RBE Windows Debug C/C++: "Tool Failed" RBE failure
UBSAN C/C++: #21014
