[Feature Request] GRPC Performance Improvements #18291
Closed
Labels
Search:Performance, enhancement (Enhancement or improvement to existing feature or request)
Description
Is your feature request related to a problem? Please describe
Per the client-server GRPC benchmarks for search and bulk, GRPC response conversion latency has room for improvement compared to HTTP.
Describe the solution you'd like
This issue captures potential latency reduction techniques to explore.
- Try optimizations in the OpenSearch proto conversion code, e.g.:
a) Pass all objects on the response side by reference instead of making copies
b) Use ThreadLocal for newBuilder() calls - this will improve first-request latency but not subsequent requests, so it is not a significant change
- Try to see which protobuf types (e.g. enums, oneofs, maps, arrays, the optional keyword, etc.) are most expensive to construct, and reduce their usage in the protobuf schema. https://www.bytesizego.com/blog/grpc-performance
- Consider using Protobuf for node-to-node communication to avoid the POJO -> Proto conversion
- Consider implementing a GRPC streaming API that chunks the documents in responses, for a better customer experience
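As a rough illustration of the conversion-side ideas above (pass-by-reference and ThreadLocal builder reuse), here is a minimal plain-Java sketch. `DocBuilder` is a hypothetical stand-in for a generated protobuf builder, not an actual OpenSearch or protobuf class; a real protobuf builder would be reset with `clear()` the same way:

```java
import java.nio.ByteBuffer;
import java.util.Arrays;

public final class ConversionSketch {

    // (a) Pass by reference: ByteBuffer.wrap shares the backing array,
    // while Arrays.copyOf allocates and fills a new one per call.
    static ByteBuffer wrapNoCopy(byte[] source) {
        return ByteBuffer.wrap(source); // zero-copy; callers must not mutate source afterwards
    }

    static ByteBuffer copyThenWrap(byte[] source) {
        return ByteBuffer.wrap(Arrays.copyOf(source, source.length));
    }

    // (b) ThreadLocal builder reuse: one builder per thread, cleared
    // before each use, so repeated conversions on the same thread skip
    // the builder allocation.
    static final class DocBuilder {
        private final StringBuilder buf = new StringBuilder(256);
        DocBuilder clear() { buf.setLength(0); return this; }
        DocBuilder add(String field) { buf.append(field).append(';'); return this; }
        String build() { return buf.toString(); }
    }

    private static final ThreadLocal<DocBuilder> BUILDER =
        ThreadLocal.withInitial(DocBuilder::new);

    static String convert(String... fields) {
        DocBuilder b = BUILDER.get().clear(); // reuse, but always clear first
        for (String f : fields) { b.add(f); }
        return b.build();
    }
}
```

The trade-off in (a) is aliasing: the wrapped buffer observes later mutations of the source array, which is only safe if the response objects are effectively immutable after conversion.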
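The streaming idea above can be sketched as a simple chunker. In a real GRPC server-streaming implementation each chunk would go to `StreamObserver.onNext`; this sketch uses a plain `Consumer` to stay self-contained, and the method name and signature are hypothetical:

```java
import java.util.List;
import java.util.function.Consumer;

public final class StreamingSketch {

    // Split the full hit list into fixed-size pages and hand each page
    // to onNext, mimicking StreamObserver.onNext in a server-streaming
    // RPC. Returns the number of chunks emitted.
    static <T> int streamInChunks(List<T> hits, int chunkSize, Consumer<List<T>> onNext) {
        int chunks = 0;
        for (int i = 0; i < hits.size(); i += chunkSize) {
            onNext.accept(hits.subList(i, Math.min(i + chunkSize, hits.size())));
            chunks++;
        }
        return chunks;
    }
}
```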
Techniques to track down response latency:
- Take flamegraphs of the response side to see which method(s) take the longest
- Add response-side metrics to break down response latency further
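As a sketch of the response-side metrics idea, a minimal per-phase timer could accumulate nanoseconds per named phase. The class and phase names here are hypothetical, not existing OpenSearch code:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Supplier;

public final class PhaseTimer {
    private final Map<String, Long> phaseNanos = new LinkedHashMap<>();

    // Wrap one named phase of the response path (e.g. "fetch",
    // "proto-convert", "serialize") so the recorded breakdown shows
    // where the latency actually goes.
    <T> T time(String phase, Supplier<T> work) {
        long start = System.nanoTime();
        try {
            return work.get();
        } finally {
            phaseNanos.merge(phase, System.nanoTime() - start, Long::sum);
        }
    }

    // Accumulated nanoseconds per phase, in first-seen order.
    Map<String, Long> breakdown() {
        return phaseNanos;
    }
}
```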
Performance metrics to track
- Latency
- CPU usage
- Memory usage
Related component
Search:Performance
Describe alternatives you've considered
No response
Additional context
No response
Status: Done