draft of client policy HTTPRoute router#2029

Merged
hawkw merged 12 commits into eliza/client-policy-api from eliza/route-matching
Dec 6, 2022

Conversation


@hawkw hawkw commented Dec 1, 2022

This branch adds a rough draft of a router for selecting client policy routes. Right now, once a route is selected, it just gets logged as a demo that route matching works. A follow-up PR will implement HTTPRoute traffic splitting once a route is matched.

Depends on #2021

@hawkw hawkw requested a review from a team as a code owner December 1, 2022 18:09
hawkw added a commit that referenced this pull request Dec 2, 2022
This branch refactors the prototype client policy code so that the
discovery of client policies still occurs after profile discovery, but
before the HTTP logical stack. This means that SO_ORIGINAL_DST addresses
need not be stored in the `Logical` target type any longer, which I
think is significantly nicer, as we no longer have a `Logical` field
that's ignored by hashing and equality.

Instead, the client policy discovery now occurs in the `switch_logical`
stack. This is also not my _favorite_ place for it, because now that stack is
responsible for more than just switching based on whether or not a
logical destination exists...but maybe the solution is just to rename
that stack.

However, with the current structure of the outbound proxy, if we want
`push_switch_logical` to push a stack that outputs a `Logical` target,
client policy discovery has to occur there, because we would want to
perform client policy lookups prior to constructing the `Logical` target
type, while we still have an `OrigDstAddr`. Alternatively, we could have
that stack output a different target type, and push a client policy
discovery stack that consumes that target type and constructs a
`Logical` target, but this felt like the simplest approach that didn't
require propagating the `OrigDstAddr` via the `Logical` target (like we
were doing previously).

Depends on #2029
@hawkw hawkw marked this pull request as draft December 2, 2022 20:32
Base automatically changed from eliza/backendrefs to eliza/client-policy-api December 6, 2022 17:05
@hawkw hawkw marked this pull request as ready for review December 6, 2022 17:05
@hawkw hawkw merged commit 9249b7e into eliza/client-policy-api Dec 6, 2022
@hawkw hawkw deleted the eliza/route-matching branch December 6, 2022 17:32
hawkw added a commit that referenced this pull request Dec 6, 2022
Depends on #2029

This branch builds on #2021 and #2029 to implement traffic splitting
based on client policy HTTPRoute backends. This is everything necessary
to implement header-based routing using HTTPRoutes, by creating a route
that matches a header and routes to a different `backendRef`. This
also means that we can now do other forms of traffic splitting (such
as weighted splits) with an HTTPRoute.

The implementation here is pretty straightforward. Currently, the
traffic split middleware that's used for ServiceProfile traffic splits
(and for the top-level list of backends in a client policy lookup, which
are currently not used by the control plane) requires dynamically
updating the split service when the set of backends in the traffic split
changes. In the per-HTTPRoute case, however, this is not necessary, as
the set of HTTPRoutes is being watched by the client policy router, and
the whole stack for a route will be torn down and rebuilt if that
route's definition changes. Therefore, I've factored out the fixed
component of the traffic split (just shifting traffic across a weighted
distribution of services) from the dynamically updating component, so
that the client policy HTTPRoute traffic split can just build and
destroy fixed split middlewares. The dynamically changing traffic split
middleware is now implemented by mutating an inner fixed split
middleware.
hawkw added a commit that referenced this pull request Dec 6, 2022
This branch adds a rough draft of a router for selecting client policy
routes. Right now, once a route is selected, it just gets logged as a
demo that route matching works. A follow-up PR will implement HTTPRoute
traffic splitting once a route is matched.

Depends on #2021