High performance getledgerentry #4350
Conversation
Force-pushed from 4836bb2 to a81b48a
I've now added a batch load endpoint. If a ledgerSeq is queried but is not available, the return payload is as follows:
I know this is just a prototype, but it would be more ergonomic to send JSON in the POST body. Also, from the example response, it seems like the batch payload could look like:

```json
{
  "entries": [
    {"key": "Base64-LedgerKey", "state": "dead"},
    {"key": "Base64-LedgerKey", "entry": "Base64-LedgerEntry", "state": "live"}
  ],
  "ledger": ledgerSeq
}
```

(the first entry being dead, the second live). Regarding `{"ledger": ledgerSeq, "state": "not_found"}`: could you simply use a 404 HTTP status code instead?
Additionally, how can I distinguish TTL'ed entries? To be clear, what I need is a way to implement `SnapshotSourceWithArchive` from the endpoint you provide.
If an entry has been evicted, it will be reported as DEAD. If the entry is expired but not evicted, it will be returned as LIVE. This is just a raw key-value lookup that doesn't enforce TTLs. To determine whether a key is dead or not, you'll need to load both the entry key and the TTL key. Here, live means "the key exists on the BucketList" and dead means "the key does not exist on the BucketList"; it is unrelated to TTL. I believe this is the same interface as your `get_including_archived`.
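That two-key lookup can be sketched as follows. This is a hypothetical client-side helper, not the endpoint's API: it assumes both responses are already parsed JSON, and that the TTL entry's `liveUntilLedgerSeq` has already been decoded from its XDR into a plain field (the field name here is illustrative).

```python
def classify(entry_resp, ttl_resp, current_ledger):
    """Combine a raw entry lookup with its TTL lookup.

    "live"/"dead" in the responses only mean present/absent on the
    BucketList, so TTL enforcement has to happen on the client side.
    """
    if entry_resp["state"] == "dead":
        return "dead"  # evicted (or never existed)
    if ttl_resp["state"] == "dead":
        return "live"  # no TTL entry: classic entries carry no TTL
    # liveUntilLedgerSeq is assumed decoded from the TTL entry's XDR
    if ttl_resp["liveUntilLedgerSeq"] >= current_ledger:
        return "live"
    return "archived"  # on the BucketList, but past its TTL
```

This mirrors the rule above: BucketList presence first, then TTL as a separate check.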
Ah, ok, so live means not evicted (but possibly expired), and I need to query TTL entries separately (all the more reason to have a batch endpoint).
If so, then we can get rid of the state field altogether (since present entries are implicitly live).
Force-pushed from a81b48a to 1dfcd9f
What is the maximum number of ledger entry keys this endpoint could accept?
I haven't tested the maximum number of entries for a single query. However, I doubt it will be a limiting factor, given that we achieved a request rate of 20k RPS with an average latency of 548.105 us for point loads, and bulk loads are more efficient.
@SirTyson We currently support 200 keys, which is more than sufficient.
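For callers holding more than 200 keys, the batch would need to be split client-side. A minimal sketch (the 200-key cap comes from this thread; the helper itself is hypothetical):

```python
MAX_KEYS_PER_REQUEST = 200  # server-side cap mentioned above

def chunk_keys(keys, limit=MAX_KEYS_PER_REQUEST):
    """Split a list of Base64 LedgerKeys into request-sized batches."""
    return [keys[i:i + limit] for i in range(0, len(keys), limit)]
```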
Force-pushed from 8e801ee to c1c7dbf
Force-pushed from c1c7dbf to 486f145
@janewang this endpoint will not be directly exposed to clients. It is the backend that RPC will call, so if more encodings or other protocols/semantics need to be supported for clients, that would be done in RPC, not core.
LGTM, could you please rebase and squash?
Force-pushed from 155dfa9 to 6341749
Done
Shaptic left a comment:
Bit of an uninformed drive-by review, but I wanted to try to understand the new behavior.
All your comments should be addressed @Shaptic
Force-pushed from 6341749 to 1cdd7aa
Force-pushed from 1cdd7aa to 86cf443
Force-pushed from b51449f to de2c587
Description
Resolves #4306
Note: the following interface is now outdated. Please refer to docs/software/commands.md for the up-to-date interface. The performance measurements are still accurate.

Previous interface:
The `getledgerentry` core endpoint is now high-performance, non-blocking, and served by a multi-threaded HTTP server that does not interact with the main thread. This enables downstream systems to query this endpoint at high rates without `captive-core` nodes losing sync. Note that this endpoint is served on a different port, separated from stellar-core's other endpoints. The following config options have been added to support this feature:

The HTTP request string is as follows:
`key` is required, and is the Base64 XDR of the LedgerKey being queried. `ledgerSeq` is optional. If not set, stellar-core will return the LedgerEntry based on the most recent ledger. If `ledgerSeq` is set, stellar-core will return an entry based on a historical ledger snapshot at the given ledger.

The return payload is a JSON object in the following format: `ledger` is the ledgerSeq that the query is based on, and is always returned. `state` returns "live" if a live LedgerEntry was found, or "dead" if the LedgerEntry does not exist. Additionally, if `ledgerSeq` is set to a snapshot that stellar-core does not currently have, "not_found" is returned. Finally, if `state == live`, "entry" is returned with the Base64 XDR encoding of the full LedgerEntry.

To measure performance, I used a parallel Go script (thanks @Shaptic) with `stellar/go/clients/stellarcore` to send requests at a very high rate over localhost. 1 million LedgerEntries of type ContractCode, ContractData, and Trustline were randomly sampled from the BucketList (such that all entries exist) for these requests, and no caching was used. The test was run on `test-core-003a.dev.stellar002` with a `captive-core` instance in sync with pubnet, with the following benchmarks:

Checklist
clang-format v8.0.0 (via `make format` or the Visual Studio extension)
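As an illustration of how a client might consume the (now-outdated) response format described in the interface above — field names follow the description; the helper itself and its error handling are hypothetical:

```python
import base64

def parse_getledgerentry(resp):
    """Interpret a getledgerentry JSON response per the previous interface."""
    ledger = resp["ledger"]  # always returned
    state = resp["state"]    # "live", "dead", or "not_found"
    if state == "live":
        # "entry" is the Base64 XDR encoding of the full LedgerEntry
        return ledger, base64.b64decode(resp["entry"])
    if state == "dead":
        return ledger, None
    # "not_found": core has no snapshot at the requested ledgerSeq
    raise LookupError(f"no snapshot at ledger {ledger}")
```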