Merged
Each last cache holds a ring buffer for each column in an index map, which preserves insertion order for faster record batch production. The ring buffer uses a custom type to handle the different data types supported by the system.
LastCacheProvider is the API used to create last caches and write table batches to them. It uses a two-layer RwLock/HashMap: the first layer for the database, and the second for the table within the database. This allows table-level locks when writing buffered data, only taking a database-level lock when creating a cache (and, in future, when removing one).
Added basic APIs on the write buffer to access the last cache, and a test in the last_cache module verifying that it works with a simple example.
Addressed three parts of PR feedback:
1. Removed the double-lock on the cache map
2. Moved the get outside the loop when writing to the cache
3. Moved the time check into the cache itself
This refactors the last cache to use a nested caching structure: the key columns for a given cache form a hierarchy of nested maps, terminating in the actual store for the cache's values. The cache is accessed via a set of predicates that can optionally specify key column values at any level of the hierarchy, gathering record batches only from children of the matching node.

Remaining TODOs:
- Handle the TTL
- Move the TableProvider impl up to the LastCache type
This rewrites the DataFusion TableProvider implementation on the correct type, i.e., LastCache, and adds conversion from filter Exprs to the cache's Predicate type.
Last caches now have their expired entries walked and removed when writes come in.
Changed key columns so that they do not accept null values, i.e., pushed rows that are missing key column values are ignored. When producing record batches for a cache, if not all key columns are used in the predicate, the non-predicate key columns are now produced as columns in the output record batches. A test covering a few of these cases was added.
Ensure key columns in the last cache that are not included in the predicate are emitted as columns in the RecordBatches. Cleaned up and added comments to the new test.
Added two tests, as per the commit title. Also moved the eviction process to a separate function so that it is not run on every write to the cache, which could be expensive, and so that entries are evicted whether or not writes are coming in.
CacheAlreadyExists errors were based only on the database and table names, not the cache name, which was incorrect.
This also adds explicit support for series key columns to distinguish them from normal tags in terms of nullability. A test was added to check that nulls work.
Support the addition of new fields to the last cache, for caches that do not have a specified set of value columns. A test was added along with the changes.
Created a new module for the DataFusion table function implementations. The TableProvider impl for LastCache was moved there, and new code implementing the TableFunctionImpl trait was written to make the last cache queryable. LastCacheProvider and LastCache were augmented to make this work:
- The provider stores an Arc<LastCache> instead of a LastCache
- LastCache uses interior mutability via an RwLock, to make the above possible
The server used to accept a socket address and bind it directly, returning an error if the bind failed. This commit changes that so the ServerBuilder accepts a TcpListener. The behaviour is essentially the same, but it allows us to bind the address from tests when instantiating the server, so we can easily assign unused ports. Tests in influxdb3_server were updated to take advantage of this, using port 0 auto-assignment to stop flaky test failures. A new, failing test was also added to that module for the last cache.
Committing here as the last cache is in a working state, but it is naively implemented: it just stores all key columns again (still with the hierarchy).
I realized that there is one bit of behaviour that does not make sense (I commented on it here): if a cache is created with parameters identical to another's, except that it has a different name, the result will be two identical caches with different names. This should instead be an error. I will open a follow-on PR to address this.
pauldix (Member) approved these changes on Jul 13, 2024, leaving a comment:
Looks good, love the integration tests!
Closes #25096
The mime crate was added for parsing request MIME types, but it is only used in the code I added; we may adopt it in other APIs / parts of the HTTP server in future PRs.