
rls: add API for response caching. #15483

Closed
htuch wants to merge 4 commits into envoyproxy:main from htuch:rls-cache

Conversation

@htuch
Member

@htuch htuch commented Mar 14, 2021

Initial PR to address #14299.

Signed-off-by: Harvey Tuch <htuch@google.com>

@repokitteh-read-only

CC @envoyproxy/api-shepherds: Your approval is needed for changes made to api/envoy/.
API shepherd assignee is @mattklein123
CC @envoyproxy/api-watchers: FYI only for changes made to api/envoy/.

🐱

Caused by: #15483 was opened by htuch.


@htuch
Member Author

htuch commented Mar 14, 2021

CC @yanavlasov @hzxuzhonghu @envoyproxy/api-shepherds

Member

@mattklein123 mattklein123 left a comment


Thanks, this LGTM with one small comment.

/wait

//
// If quota is available, a RLS request will not be made and the quota will be reduced by 1.
//
// If there is not sufficient quota, a RLS request will be made. When the result is returned,
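The caching semantics in the quoted proto comment can be sketched as follows. This is a minimal illustration with hypothetical names (`CachedRateLimitClient`, `request`), not Envoy's actual implementation:

```python
# Sketch of the quota semantics described in the quoted comment
# (hypothetical names, not Envoy code): consume cached quota when
# available, otherwise fall back to an RLS round trip.

class CachedRateLimitClient:
    def __init__(self, rls_client):
        self.rls_client = rls_client  # makes the remote RLS call
        self.quota = 0                # locally cached request quota

    def should_allow(self, descriptors):
        if self.quota > 0:
            # Quota is available: no RLS request, quota reduced by 1.
            self.quota -= 1
            return True
        # Insufficient quota: make an RLS request and cache the result.
        allowed, granted_quota = self.rls_client.request(descriptors)
        self.quota = max(granted_quota - 1, 0)  # one unit consumed now
        return allowed
```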
Contributor


Doesn't this mean that every Nth request will have higher latency due to requiring a round-trip with the RLS server? It seems like we should try to set things up such that the client already has the next quota before it actually needs it.

One option here would be some sort of pre-fetching mechanism, where the client can fetch a new quota when the old quota reaches some minimum number of remaining requests.

Another option would be a streaming API, where the server could update the client's quota whenever it changes without the client needing to explicitly make a request. In fact, couldn't the rate limit quota just be another xDS resource type that the client subscribes to?
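The pre-fetching option could look roughly like this. Everything here (`PrefetchingQuotaCache`, `low_watermark`, `request_quota`) is a hypothetical sketch of the idea, not part of the proposed API:

```python
# Sketch of the pre-fetching idea: refresh quota once the cached quota
# drops below a low watermark, so requests rarely pay the RLS
# round-trip latency. All names here are hypothetical.

class PrefetchingQuotaCache:
    def __init__(self, rls_client, low_watermark=2):
        self.rls_client = rls_client
        self.low_watermark = low_watermark
        self.quota = 0

    def should_allow(self, descriptors):
        if self.quota <= self.low_watermark:
            # In a real client this refresh would run asynchronously, off
            # the request path; it is inlined here to keep the sketch short.
            self.quota += self.rls_client.request_quota(descriptors)
        if self.quota > 0:
            self.quota -= 1
            return True
        return False
```

With a suitable watermark, the refresh happens while quota remains, so the latency of the RLS round trip is hidden from the data path.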

Member Author


I think the client can optionally make a preemptive RLS request before the quota expires. So, this is actually at the discretion of the implementation.

@yanavlasov yanavlasov self-assigned this Mar 22, 2021
Signed-off-by: Harvey Tuch <htuch@google.com>
@mattklein123
Member

Is this ready for non-draft review?

/wait-any

@htuch htuch changed the title [WiP] rls: add API for response caching. rls: add API for response caching. Apr 9, 2021
Signed-off-by: Harvey Tuch <htuch@google.com>
@htuch htuch marked this pull request as ready for review April 9, 2021 00:44
Member

@mattklein123 mattklein123 left a comment


Thanks, LGTM with a small clarification request.

/wait

Comment on lines +149 to +151
// have no cached entry. This will trigger a new RLS request.
// When the result is returned, a single unit will be consumed from the quota for all
// matching descriptors.
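The per-descriptor behavior in the quoted comment can be sketched as below. The names are hypothetical, and the sketch does not model entry expiry (an exhausted entry simply denies until refreshed in a real implementation):

```python
# Sketch of the per-descriptor caching described in the quoted comment
# (hypothetical names): descriptors with no cached entry trigger a new
# RLS request; once the result arrives, a single unit is consumed from
# the quota shared by all matching descriptors.

class DescriptorQuotaCache:
    def __init__(self, rls_client):
        self.rls_client = rls_client
        self.cache = {}  # descriptor set -> remaining quota

    def check(self, descriptors):
        key = tuple(sorted(descriptors))
        if key not in self.cache:
            # No cached entry: trigger a new RLS request.
            self.cache[key] = self.rls_client.request_quota(descriptors)
        if self.cache[key] > 0:
            # A single unit is consumed for all matching descriptors.
            self.cache[key] -= 1
            return True
        return False
```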
Member


What if the new request doesn't get any additional quota? Can you clarify? Same in the next item?

Member Author

@htuch htuch May 10, 2021


@yanavlasov will take this PR over as he's going to be owning the Google Envoy RLS efforts here going forward.

@github-actions

This pull request has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days if no further activity occurs. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@github-actions github-actions bot added the stale label (stalebot believes this issue/PR has not been touched recently) May 10, 2021
@github-actions

This pull request has been automatically closed because it has not had activity in the last 37 days. Please feel free to give a status update now, ping for review, or re-open when it's ready. Thank you for your contributions!

@github-actions github-actions bot closed this May 18, 2021
@yanavlasov yanavlasov reopened this May 18, 2021
@yanavlasov yanavlasov added the no stalebot label (disables stalebot from closing an issue) and removed the stale label May 18, 2021
@yanavlasov
Contributor

Superseded by PR #16942

@yanavlasov yanavlasov closed this Jun 11, 2021

Labels

api, no stalebot, waiting


4 participants