
perf: improved compression with lsm vectors#2911

Merged
lvca merged 2 commits into main from lsm-vector-ultra-compressed
Dec 11, 2025
Conversation

@lvca
Contributor

@lvca lvca commented Dec 11, 2025

Each entry is no longer a fixed 17 bytes but variable-sized. The page header is also gone, since LSM pages are immutable. Pointers are now absolute file pointers.

What does this PR do?

A brief description of the change being made with this pull request.

Motivation

What inspired you to submit this pull request?

Related issues

A list of issues that are fixed, contain architectural discussions, or are otherwise relevant
to this Pull Request.

Additional Notes

Anything else we should know when reviewing?

Checklist

  • I have run the build using the `mvn clean package` command
  • My unit tests cover both failure and success scenarios

@lvca lvca requested a review from Copilot December 11, 2025 15:42
@lvca lvca self-assigned this Dec 11, 2025
@gemini-code-assist
Contributor

Summary of Changes

Hello @lvca, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the performance and storage efficiency of LSM vector indexes. It introduces a new variable-sized encoding for index entries and transitions to using absolute file offsets for data location, eliminating fixed-size overheads and pointer tables in immutable pages. These changes result in better data compression, reduced memory usage, and potentially faster access times by streamlining how vector metadata is stored and retrieved.

Highlights

  • Variable-Sized Entry Encoding: Vector index entries (vector ID, bucket ID, and position) are no longer fixed at 17 bytes but are now stored using variable-sized encoding. This significantly improves compression by using only the necessary bytes for each value, reducing overall storage footprint.
  • Elimination of Page Headers and Pointer Tables for Immutable Pages: For immutable (compacted) LSM vector index pages, the overhead of page headers and pointer tables has been removed. Entries are now stored sequentially, which further enhances compression and simplifies data access by allowing direct reads without indirection.
  • Transition to Absolute File Offsets: Vector entry locations are now tracked using a single absoluteFileOffset (long) instead of separate pageNum and pageOffset fields. This change simplifies location management, potentially speeding up data retrieval by providing direct pointers to data within the file.
  • Refactored Page Write Operations: The MutablePage class's write methods (writeLong, writeInt, writeByte, writeNumber) have been updated to return the number of bytes written. This change is crucial for accurately managing offsets when dealing with variable-sized entries and sequential storage.
  • Improved Compaction Process: The compaction logic has been updated to leverage variable-sized entries and absolute file offsets. The LSMVectorIndexCompactor now processes entries sequentially, tracks absolute file offsets, and no longer stores vector data in VectorEntryData during compaction, as vectors are loaded from documents.
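To make the sizing concrete, here is a minimal sketch of variable-length integer encoding in the spirit of what the summary describes. The `numberSpace` helper below is purely illustrative (7 bits per byte, varint style) and is not ArcadeDB's actual `Binary.getNumberSpace` implementation; the 17-byte fixed size is the pre-PR layout mentioned in the description.

```java
// Illustrative sketch only: variable-length integer sizing, similar in
// spirit to the PR's variable-sized entries. NOT ArcadeDB's actual format.
public class VarIntSketch {
  // Number of bytes needed to store v with 7 payload bits per byte (varint style)
  static int numberSpace(long v) {
    int bytes = 1;
    while ((v >>>= 7) != 0)
      bytes++;
    return bytes;
  }

  public static void main(String[] args) {
    // Fixed layout from before the PR: every entry cost 17 bytes,
    // regardless of how small its values were.
    final int FIXED = 17;

    // With variable sizing, a small (vectorId, bucketId, position) triple shrinks:
    long vectorId = 42, bucketId = 3, position = 120;
    int variable = numberSpace(vectorId) + numberSpace(bucketId) + numberSpace(position);

    System.out.println("fixed=" + FIXED + " variable=" + variable); // fixed=17 variable=3
  }
}
```

Small IDs fit in a single byte each, which is where the compression win comes from; only large values pay for extra bytes.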

@mergify
Contributor

mergify bot commented Dec 11, 2025

🧪 CI Insights

Here's what we observed from your CI run for 1243a90.

🟢 All jobs passed!

But CI Insights is watching 👀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant refactoring of how vector index entries are stored and accessed on disk, moving from fixed-size entries with pointer tables to variable-sized entries stored sequentially. The MutablePage class now returns the number of bytes written for various write operations, enabling callers to track pos more accurately. The LSMVectorIndex and LSMVectorIndexCompacted classes have been updated to reflect this change, eliminating pointer tables and calculating entry sizes dynamically using variable-sized encoding (Binary.getNumberSpace). Consequently, the VectorLocationIndex and related data structures now store and use absoluteFileOffset (a long) instead of pageNum and pageOffset for direct, efficient access to entries. This change also includes logic to validate offsetFreeContent during writes to detect and handle old-format or corrupted pages. Review comments primarily focused on replacing direct System.out.println calls with LogManager.instance().log for consistent logging practices within the codebase.

Comment on lines +1124 to +1129
System.out.println("loadVectorsFromPages DONE: Loaded " + vectorIndex.size() + " vector locations (" + entriesRead
+ " total entries) for index: " + indexName
+ ", nextId=" + nextId.get() + ", fileId=" + getFileId() + ", totalPages=" + getTotalPages() +
(compactedSubIndex != null ?
", compactedFileId=" + compactedSubIndex.getFileId() + ", compactedPages=" + compactedSubIndex.getTotalPages() :
""));
medium

Direct use of System.out.println should be avoided in library code. Please use the project's logging framework (LogManager) for logging messages. This provides better control over log levels and output destinations, and is consistent with the logging practices elsewhere in the codebase.

      LogManager.instance().log(this, Level.INFO, "loadVectorsFromPages DONE: Loaded %d vector locations (%d total entries) for index: %s, nextId=%d, fileId=%d, totalPages=%d%s",
          vectorIndex.size(), entriesRead, indexName, nextId.get(), getFileId(), getTotalPages(),
          (compactedSubIndex != null ?
              ", compactedFileId=" + compactedSubIndex.getFileId() + ", compactedPages=" + compactedSubIndex.getTotalPages() :
              ""));

Contributor Author


@copilot please fix it

Comment on lines 1226 to 1228
System.out.println(
"loadVectorsFromFile DONE: fileId=" + fileId + ", entriesRead=" + entriesRead + ", pagesWithEntries=" + pagesWithEntries);
return entriesRead;

medium

Direct use of System.out.println should be avoided in library code. Please use the project's logging framework (LogManager) for logging messages. This provides better control over log levels and output destinations.

    LogManager.instance().log(this, Level.INFO, "loadVectorsFromFile DONE: fileId=%d, entriesRead=%d, pagesWithEntries=%d", fileId, entriesRead, pagesWithEntries);
    return entriesRead;


Copilot AI commented Dec 11, 2025

@lvca I've opened a new pull request, #2912, to work on those changes. Once the pull request is ready, I'll request review from you.

@lvca lvca merged commit 5a938f6 into main Dec 11, 2025
5 of 7 checks passed

Copilot AI left a comment


Pull request overview

This PR improves compression for LSM vector indexes by migrating from fixed 17-byte entries to variable-sized encoding using Binary.getNumberSpace(). The changes eliminate pointer tables and use absolute file offsets instead of page-relative offsets for direct O(1) access. Entry size is now calculated per entry, with small IDs/positions using fewer bytes. Additionally, debug logging was migrated from System.out.println to LogManager for consistency.

Key changes:

  • Variable-sized encoding for vector IDs, bucket IDs, and positions using Binary.getNumberSpace()
  • Absolute file offsets replace page number + page offset pairs for direct access
  • Pointer tables removed; entries now written sequentially for space efficiency
  • Debug logging migrated from System.out.println to LogManager.instance().log()
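As a rough sketch of the offset change, an absolute file offset can stand in for a (pageNum, pageOffset) pair as below. The constants are hypothetical placeholders, not ArcadeDB's real page sizes, and the class is illustrative rather than actual project code.

```java
// Sketch, not ArcadeDB code: one absolute long offset replacing a
// (pageNum, pageOffset) pair. PAGE_SIZE is a hypothetical placeholder.
public class OffsetSketch {
  static final int PAGE_SIZE = 65536; // hypothetical page size

  // Combine page number and in-page offset into one absolute file offset
  static long toAbsolute(int pageNum, int pageOffset) {
    return (long) pageNum * PAGE_SIZE + pageOffset;
  }

  // Recover the pair when a page-level read is still needed
  static int pageNum(long absolute)    { return (int) (absolute / PAGE_SIZE); }
  static int pageOffset(long absolute) { return (int) (absolute % PAGE_SIZE); }

  public static void main(String[] args) {
    long abs = toAbsolute(3, 124);
    System.out.println(abs + " -> page " + pageNum(abs) + ", offset " + pageOffset(abs));
  }
}
```

Storing the single long gives direct O(1) addressing into the file, while the page decomposition remains recoverable by division and modulo.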

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 7 comments.

Show a summary per file
File Description
VectorLocationIndex.java Updated VectorLocation to use absoluteFileOffset (long) instead of pageNum/pageOffset, supporting direct file access
LSMVectorIndexCompactor.java Migrated to variable-sized entry parsing, removed pointer table logic, replaced System.out.println with LogManager logging
LSMVectorIndexCompacted.java Implemented variable-sized encoding for compaction writes, removed pointer tables, sequential entry storage
LSMVectorIndex.java Updated all entry reading/writing to use variable-sized encoding and absolute file offsets throughout
LSMTreeIndexMutable.java Refactored to use return values from write methods for cleaner code
LSMTreeIndexCompacted.java Refactored to use return values from write methods for cleaner code
MutablePage.java Modified write methods (writeInt, writeLong, writeByte, writeNumber, writeUnsignedInt) to return the number of bytes written

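The "write methods return bytes written" pattern described for MutablePage can be sketched as follows. This is a hedged illustration of the pattern only, not the real MutablePage API; the method names mirror those listed above, but the bodies are simplified.

```java
// Illustrative only: write helpers that return the number of bytes written,
// so the caller can advance an offset across variable-sized entries.
import java.nio.ByteBuffer;

public class PageWriteSketch {
  final ByteBuffer content = ByteBuffer.allocate(256);

  int writeInt(int pos, int value)   { content.putInt(pos, value);  return 4; }
  int writeLong(int pos, long value) { content.putLong(pos, value); return 8; }
  int writeByte(int pos, byte value) { content.put(pos, value);     return 1; }

  public static void main(String[] args) {
    PageWriteSketch page = new PageWriteSketch();
    int pos = 0;
    pos += page.writeLong(pos, 42L);      // e.g. vector id
    pos += page.writeInt(pos, 3);         // e.g. bucket id
    pos += page.writeByte(pos, (byte) 1); // e.g. a flag
    System.out.println("bytes written: " + pos); // bytes written: 13
  }
}
```

Accumulating the return values keeps the offset correct even when each entry's width differs, which is exactly why the refactoring matters for variable-sized encoding.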

@@ -176,81 +175,59 @@ public CompactionAppendResult appendDuringCompaction(MutablePage currentPage,
headerSize = getHeaderSize(pageNum);

Copilot AI Dec 11, 2025


When creating a new page due to insufficient space, the currentFileOffset is not updated to reflect the new page boundary. After line 168 creates a new page, currentFileOffset should be set to: pageNum * pageSize + BasePage.PAGE_HEADER_SIZE + headerSize (where pageNum and headerSize are the values for the new page after line 174-175). Without this update, all subsequent entries in the new page will have incorrect absolute file offsets that don't match their actual physical locations in the file.

Suggested change
headerSize = getHeaderSize(pageNum);
headerSize = getHeaderSize(pageNum);
// Update currentFileOffset to reflect new page boundary
currentFileOffset.set(pageNum * pageSize + BasePage.PAGE_HEADER_SIZE + headerSize);

Copilot uses AI. Check for mistakes.

MutablePage currentPage = null;
final AtomicInteger compactedPageSeries = new AtomicInteger(0);
final AtomicLong currentFileOffset = new AtomicLong(0);

Copilot AI Dec 11, 2025


The currentFileOffset is initialized to 0, but this doesn't account for the page structure. When entries are written to compacted pages, their absolute file offsets should include the page number * pageSize + BasePage.PAGE_HEADER_SIZE + headerSize. Starting at 0 means the first entry will be recorded with offset 0, but when reading it back (see LSMVectorIndex.java:660, 726, 1192), the code calculates offsets differently. The currentFileOffset should either be initialized to account for the first page's structure, or the appendDuringCompaction method needs to properly manage this offset when pages are created.

Suggested change
final AtomicLong currentFileOffset = new AtomicLong(0);
final AtomicLong currentFileOffset = new AtomicLong(BasePage.PAGE_HEADER_SIZE + compactedIndex.getHeaderSize());

Comment on lines +51 to +52
// Base header size (without page 0 metadata): offsetFree(4) + count(4) + mutable(1) + series(4) = 13
private static final int BASE_HEADER_SIZE = 4 + 4 + 1 + 4;

Copilot AI Dec 11, 2025


The comment states the base header size is 13 bytes (offsetFree(4) + count(4) + mutable(1) + series(4)), but this conflicts with LSMVectorIndex.HEADER_BASE_SIZE which is 9 bytes (offsetFreeContent(4) + numberOfEntries(4) + mutable(1)). The BASE_HEADER_SIZE here is actually 13 bytes because it includes the series field (4 bytes), which is specific to compacted pages. The comment and constant are correct for this class, but the naming BASE_HEADER_SIZE could be confusing since it differs from the main index's HEADER_BASE_SIZE. Consider renaming to COMPACTED_HEADER_SIZE or adjusting the comment to clarify this is specific to compacted pages and includes the series field.

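The two header layouts contrasted in this comment can be written out as constants. The field sizes come from the comment itself; the class and constant grouping here are illustrative, not the project's real declarations.

```java
// Sketch of the two header layouts discussed above (field sizes from the
// review comment; names are illustrative, not the actual classes).
public class HeaderSizes {
  // Mutable index page header: offsetFreeContent(4) + numberOfEntries(4) + mutable(1)
  static final int HEADER_BASE_SIZE = 4 + 4 + 1; // = 9 bytes

  // Compacted page header additionally carries the series field: + series(4)
  static final int COMPACTED_BASE_HEADER_SIZE = HEADER_BASE_SIZE + 4; // = 13 bytes

  public static void main(String[] args) {
    System.out.println(HEADER_BASE_SIZE + " vs " + COMPACTED_BASE_HEADER_SIZE); // 9 vs 13
  }
}
```

Laying the constants out this way makes the 4-byte difference (the series field) explicit, which is the ambiguity the comment asks to resolve by renaming or better documentation.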
Comment on lines +1124 to 1128
LogManager.instance().log(this, Level.FINE,
"loadVectorsFromPages DONE: Loaded " + vectorIndex.size() + " vector locations (" + entriesRead
+ " total entries) for index: " + indexName
+ ", nextId=" + nextId.get() + ", fileId=" + getFileId() + ", totalPages=" + getTotalPages() +
(compactedSubIndex != null ?

Copilot AI Dec 11, 2025


This System.out.println statement should be replaced with LogManager.instance().log() for consistency with other logging in the codebase. The LSMVectorIndexCompactor class was already updated to remove similar System.out.println statements in favor of LogManager logging (see lines 85-86, 187-188, 212-227 in the compactor). This log statement provides useful operational information about loaded vector locations and should use proper logging with an appropriate level like Level.INFO or Level.FINE.

Suggested change
LogManager.instance().log(this, Level.FINE,
"loadVectorsFromPages DONE: Loaded " + vectorIndex.size() + " vector locations (" + entriesRead
+ " total entries) for index: " + indexName
+ ", nextId=" + nextId.get() + ", fileId=" + getFileId() + ", totalPages=" + getTotalPages() +
(compactedSubIndex != null ?
LogManager.instance().log(this, Level.INFO,
"loadVectorsFromPages DONE: Loaded %d vector locations (%d total entries) for index: %s, nextId=%d, fileId=%d, totalPages=%d%s",
null,
vectorIndex.size(), entriesRead, indexName, nextId.get(), getFileId(), getTotalPages(),
(compactedSubIndex != null ?
String.format(", compactedFileId=%d, compactedPages=%d", compactedSubIndex.getFileId(), compactedSubIndex.getTotalPages()) :

Comment on lines 1226 to 1227
}
}

Copilot AI Dec 11, 2025


This System.out.println statement should be replaced with LogManager.instance().log() for consistency with other logging in the codebase. The LSMVectorIndexCompactor class was already updated to remove similar System.out.println statements in favor of LogManager logging. This log statement provides useful debug information and should use proper logging with an appropriate level like Level.FINE or Level.INFO.

Comment on lines +178 to +179
// Record the absolute file offset where this entry will be written
final long entryFileOffset = currentFileOffset.get();

Copilot AI Dec 11, 2025


The absolute file offset calculation is incorrect. The currentFileOffset starts at 0 and only tracks bytes written within pages, but it doesn't account for page boundaries, BasePage.PAGE_HEADER_SIZE, or the variable header sizes. When reading entries (e.g., LSMVectorIndex.java:660, 726, 1192), the code calculates offsets as: pageStartOffset + BasePage.PAGE_HEADER_SIZE + currentOffset. However, the currentFileOffset here doesn't account for these components when a new page is created. When switching to a new page (line 168), currentFileOffset should be updated to: pageNum * pageSize + BasePage.PAGE_HEADER_SIZE + headerSize, to reflect the true file position where entries will be written.

@lvca lvca deleted the lsm-vector-ultra-compressed branch December 11, 2025 15:49
@lvca lvca added this to the 25.12.1 milestone Dec 11, 2025
@codacy-production

codacy-production bot commented Dec 11, 2025

Coverage summary from Codacy

See diff coverage on Codacy

Coverage variation   Diff coverage
-0.22%               83.18%

Coverage variation details
Commit                             Coverable lines   Covered lines   Coverage
Common ancestor commit (f37a9ae)   75784             48504           64.00%
Head commit (1243a90)              75778 (-6)        48335 (-169)    63.79% (-0.22%)

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: <coverage of head commit> - <coverage of common ancestor commit>

Diff coverage details
Scope                  Coverable lines   Covered lines   Diff coverage
Pull request (#2911)   214               178             83.18%

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: <covered lines added or modified>/<coverable lines added or modified> * 100%


mergify bot added a commit to robfrank/linklift that referenced this pull request Jan 9, 2026
….1 [skip ci]

Bumps [com.arcadedb:arcadedb-network](https://github.com/ArcadeData/arcadedb) from 25.11.1 to 25.12.1.
Release notes

*Sourced from [com.arcadedb:arcadedb-network's releases](https://github.com/ArcadeData/arcadedb/releases).*

> 25.12.1
> -------
>
> ArcadeDB 25.12.1 Release Notes
> ==============================
>
> We're excited to announce the release of ArcadeDB v25.12.1! This release includes significant bug fixes, new features, performance improvements, and dependency updates.
>
> Highlights
> ----------
>
> ### Vector Search Enhancements
>
> * **Fixed critical vector quantization bug** ([#3052](https://redirect.github.com/ArcadeData/arcadedb/issues/3052), [#3053](https://redirect.github.com/ArcadeData/arcadedb/issues/3053)) - INT8 and BINARY vector quantization now works correctly across all dimensions
> * **New filtered vector search** ([#3071](https://redirect.github.com/ArcadeData/arcadedb/issues/3071), [#3072](https://redirect.github.com/ArcadeData/arcadedb/issues/3072)) - LSMVectorIndex now supports filtered searches for more precise queries
> * **Better vector type support** ([#3090](https://redirect.github.com/ArcadeData/arcadedb/issues/3090)) - Added support for `List<Float>` in vector indexes
> * **Improved compression** ([#2911](https://redirect.github.com/ArcadeData/arcadedb/issues/2911)) - Enhanced compression for LSM vector indexes
> * **Fixed HNSW graph persistence** ([#2916](https://redirect.github.com/ArcadeData/arcadedb/issues/2916)) - Ensures JVector HNSW graph file is properly closed and flushed to disk
>
> ### SQL and Query Improvements
>
> * **Fixed IF statement execution** ([#2775](https://redirect.github.com/ArcadeData/arcadedb/issues/2775)) - SQL scripts with IF statements now execute correctly from console
> * **Fixed index creation with IF NOT EXISTS** ([#1819](https://redirect.github.com/ArcadeData/arcadedb/issues/1819)) - Console no longer errors when creating existing indexes with IF NOT EXISTS clause
> * **Custom function parameter binding** ([#3046](https://redirect.github.com/ArcadeData/arcadedb/issues/3046), [#3049](https://redirect.github.com/ArcadeData/arcadedb/issues/3049)) - Fixed parameter binding for SQL and JavaScript custom functions
> * **SQL method consistency** ([#2964](https://redirect.github.com/ArcadeData/arcadedb/issues/2964), [#2967](https://redirect.github.com/ArcadeData/arcadedb/issues/2967)) - `values()` method now behaves consistently with `keys()` method
> * **CONTAINSANY index fix** ([#3051](https://redirect.github.com/ArcadeData/arcadedb/issues/3051)) - Fixed index usage for lists of embedded documents with CONTAINSANY
>
> ### Transaction Management
>
> * **Revised transaction logic** ([#3074](https://redirect.github.com/ArcadeData/arcadedb/issues/3074)) - Improved transaction handling and consistency
> * **Fixed edge index invalidation** ([#3091](https://redirect.github.com/ArcadeData/arcadedb/issues/3091)) - Edge indexes now remain valid in edge-case scenarios
>
> ### New Features
>
> * **Database size API** ([#3045](https://redirect.github.com/ArcadeData/arcadedb/issues/3045)) - Added new `database.getSize()` API method
> * **Version display enhancement** ([#2905](https://redirect.github.com/ArcadeData/arcadedb/issues/2905)) - Server log version number now displayed consistently
>
> What's Changed
> --------------
>
> ### Bug Fixes
>
> * Fix INT8 and BINARY vector quantization offset bug in LSMVectorIndex page loading by [`@​Copilot`](https://github.com/Copilot) in [ArcadeData/arcadedb#3053](https://redirect.github.com/ArcadeData/arcadedb/pull/3053)
> * fix: revert SQL grammar changes and disable deep level JSON insert tests by [`@​robfrank`](https://github.com/robfrank) in [ArcadeData/arcadedb#2961](https://redirect.github.com/ArcadeData/arcadedb/pull/2961)
> * [#2915](https://redirect.github.com/ArcadeData/arcadedb/issues/2915) fix: ensure Jvector HNSW graph file is closed and flushed to disk on database close by [`@​robfrank`](https://github.com/robfrank) in [ArcadeData/arcadedb#2916](https://redirect.github.com/ArcadeData/arcadedb/pull/2916)
> * fix: make values method behave like keys method by [`@​gramian`](https://github.com/gramian) in [ArcadeData/arcadedb#2967](https://redirect.github.com/ArcadeData/arcadedb/pull/2967)
> * Fix custom function parameter binding for SQL and JavaScript functions by [`@​Copilot`](https://github.com/Copilot) in [ArcadeData/arcadedb#3049](https://redirect.github.com/ArcadeData/arcadedb/pull/3049)
> * fix CONTAINSANY index use for lists of embedded documents by [`@​gramian`](https://github.com/gramian) in [ArcadeData/arcadedb#3051](https://redirect.github.com/ArcadeData/arcadedb/pull/3051)
> * fix: support List in vector index by [`@​szekelyszabi`](https://github.com/szekelyszabi) in [ArcadeData/arcadedb#3090](https://redirect.github.com/ArcadeData/arcadedb/pull/3090)
>
> ### Features
>
> * Show version number same as in server log by [`@​gramian`](https://github.com/gramian) in [ArcadeData/arcadedb#2905](https://redirect.github.com/ArcadeData/arcadedb/pull/2905)
> * feat: added new `database.getSize()` api by [`@​lvca`](https://github.com/lvca) in [ArcadeData/arcadedb#3045](https://redirect.github.com/ArcadeData/arcadedb/pull/3045)
> * Add filtered vector search support to LSMVectorIndex by [`@​Copilot`](https://github.com/Copilot) in [ArcadeData/arcadedb#3072](https://redirect.github.com/ArcadeData/arcadedb/pull/3072)
> * add stars chart by [`@​robfrank`](https://github.com/robfrank) in [ArcadeData/arcadedb#3084](https://redirect.github.com/ArcadeData/arcadedb/pull/3084)
>
> ### Performance Improvements
>
> * Lsm vector fix by [`@​lvca`](https://github.com/lvca) in [ArcadeData/arcadedb#2907](https://redirect.github.com/ArcadeData/arcadedb/pull/2907)
> * perf: improved compression with lsm vectors by [`@​lvca`](https://github.com/lvca) in [ArcadeData/arcadedb#2911](https://redirect.github.com/ArcadeData/arcadedb/pull/2911)

... (truncated)


Commits

* [`6290454`](ArcadeData/arcadedb@6290454) Set release version to 25.12.1
* [`5bdbdfa`](ArcadeData/arcadedb@5bdbdfa) chore: removed system.out
* [`5764b95`](ArcadeData/arcadedb@5764b95) fix: deletion of light edge after last fix
* [`a81163a`](ArcadeData/arcadedb@a81163a) fix: avoid reuse of deleted record in same tx
* [`a42ae5e`](ArcadeData/arcadedb@a42ae5e) perf: avoid conversion of float[] into List<Float> in SQL engine
* [`c8fb3e5`](ArcadeData/arcadedb@c8fb3e5) chore: refactoring conversion functions to float[] in a centralized method
* [`de9bfcf`](ArcadeData/arcadedb@de9bfcf) fix: support List<Float> in vector index ([#3090](https://redirect.github.com/ArcadeData/arcadedb/issues/3090))
* [`9e964ef`](ArcadeData/arcadedb@9e964ef) Merge branch 'main' of <https://github.com/ArcadeData/arcadedb>
* [`07c7d3e`](ArcadeData/arcadedb@07c7d3e) Fixed failing test using java
* [`51a058b`](ArcadeData/arcadedb@51a058b) fix CONTAINSANY index use for lists of embedded documents ([#3051](https://redirect.github.com/ArcadeData/arcadedb/issues/3051))
* Additional commits viewable in [compare view](ArcadeData/arcadedb@25.11.1...25.12.1)
  
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility\_score?dependency-name=com.arcadedb:arcadedb-network&package-manager=maven&previous-version=25.11.1&new-version=25.12.1)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
Dependabot commands and options
  
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
robfrank pushed a commit that referenced this pull request Feb 11, 2026
* perf: improved compression with lsm vectors

Each entry is no longer a fixed 17 bytes but variable-sized. The page header was also removed, since LSM pages are immutable. Pointers are now absolute file pointers.

* chore: system out -> log

(cherry picked from commit 5a938f6)
mergify bot added a commit that referenced this pull request Feb 15, 2026
… [skip ci]

Bumps [pdfmake](https://github.com/bpampuch/pdfmake) from 0.3.3 to 0.3.4.
Release notes

*Sourced from [pdfmake's releases](https://github.com/bpampuch/pdfmake/releases).*

> 0.3.4
> -----
>
> * Added vertical alignment for table cells via `verticalAlignment` property, values: `top` (default), `middle`, `bottom`
> * Fixed margin inheritance when styles are extended multiple times


Changelog

*Sourced from [pdfmake's changelog](https://github.com/bpampuch/pdfmake/blob/master/CHANGELOG.md).*

> 0.3.4 - 2026-02-13
> ------------------
>
> * Added vertical alignment for table cells via `verticalAlignment` property, values: `top` (default), `middle`, `bottom`
> * Fixed margin inheritance when styles are extended multiple times


Commits

* [`041a6bb`](bpampuch/pdfmake@041a6bb) 0.3.4
* [`f728226`](bpampuch/pdfmake@f728226) added tests from [#2914](https://redirect.github.com/bpampuch/pdfmake/issues/2914) for StyleContextStack
* [`581c1c6`](bpampuch/pdfmake@581c1c6) GitHub Action for automatic publishing to npm
* [`f04c91d`](bpampuch/pdfmake@f04c91d) remove build folder from git repository
* [`80b9d39`](bpampuch/pdfmake@80b9d39) remove build folder from git repository
* [`a01f55a`](bpampuch/pdfmake@a01f55a) fixed margin inheritance when styles are extended multiple times [#2914](https://redirect.github.com/bpampuch/pdfmake/issues/2914)
* [`3b42096`](bpampuch/pdfmake@3b42096) update dependencies
* [`ea985be`](bpampuch/pdfmake@ea985be) fixed get node height [#74](https://redirect.github.com/bpampuch/pdfmake/issues/74)
* [`2cce666`](bpampuch/pdfmake@2cce666) move Roboto fonts to one place
* [`395e0a8`](bpampuch/pdfmake@395e0a8) fixed view height for vertical alignment with rowspan [#74](https://redirect.github.com/bpampuch/pdfmake/issues/74) [#2911](https://redirect.github.com/bpampuch/pdfmake/issues/2911)
* Additional commits viewable in [compare view](bpampuch/pdfmake@0.3.3...0.3.4)

Maintainer changes

This version was pushed to npm by GitHub Actions, a new releaser for pdfmake since your current version.

  
[![Dependabot compatibility score](https://dependabot-badges.githubapp.com/badges/compatibility\_score?dependency-name=pdfmake&package-manager=npm\_and\_yarn&previous-version=0.3.3&new-version=0.3.4)](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)