
BQ encodings #6663

Merged: generall merged 20 commits into dev from bq-encodings on Jun 19, 2025
Conversation

@IvanPleshkov IvanPleshkov (Contributor) commented Jun 10, 2025

This PR introduces new BQ encoding algorithms: two-bit quantization and one-and-a-half-bit quantization.

Both algorithms use similar scoring but store 2 or 1.5 bits per vector element instead of 1.

The idea of 2-bit BQ is to use one more bit to represent a zero value. In 1.5-bit BQ, two adjacent elements share the extra bit, combining their zero indicators with an AND operation.
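The thresholding idea can be sketched as follows. This is a minimal illustration, not the PR's actual implementation; the function names and fixed thresholds are made up (the real code derives per-dimension thresholds from vector statistics):

```rust
// Hypothetical sketch of the encoding idea; names and thresholds are
// illustrative, not the PR's actual code.

// 1-bit BQ: one bit per dimension, set when the value is positive.
fn encode_one_bit(v: f32) -> u8 {
    (v > 0.0) as u8
}

// 2-bit BQ: a second bit lets a dimension take three states, so values
// near zero are distinguished from clearly positive ones:
// 0b00 (negative), 0b01 (near zero), 0b11 (positive).
fn encode_two_bits(v: f32, low: f32, high: f32) -> u8 {
    if v > high {
        0b11
    } else if v > low {
        0b01
    } else {
        0b00
    }
}

fn main() {
    assert_eq!(encode_one_bit(0.7), 1);
    assert_eq!(encode_one_bit(-0.3), 0);
    // With illustrative thresholds low = -0.5, high = 0.5:
    assert_eq!(encode_two_bits(0.9, -0.5, 0.5), 0b11);
    assert_eq!(encode_two_bits(0.1, -0.5, 0.5), 0b01);
    assert_eq!(encode_two_bits(-0.8, -0.5, 0.5), 0b00);
}
```

In 1.5-bit BQ the "near zero" bit would be shared between pairs of dimensions, reducing storage from 2 to 1.5 bits per element on average.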

This PR was tested using vector-db-benchmark.
Script to switch BQ to 2-bit BQ:

curl -X PATCH "http://$QDRANT_HOST/collections/benchmark" \
  -H 'Content-Type: application/json' \
  --data-raw '{
      "quantization_config": {
        "binary": {
          "always_ram": true,
          "encoding": "two_bits"
        }
      }
    }' | jq

Use one_and_half_bits instead of two_bits for 1.5-bit quantization.

Results for dbpedia 100K with oversampling=1:

| Encoding   | mean_precisions    | total_time        |
|------------|--------------------|-------------------|
| BQ (1-bit) | 0.6170399999999999 | 5.160307651385665 |
| 1.5-bit BQ | 0.69166            | 5.693302632775158 |
| 2-bit BQ   | 0.74374            | 5.786725513637066 |

Results for laion-small-clip 100K with oversampling=1:

| Encoding   | mean_precisions     | total_time        |
|------------|---------------------|-------------------|
| BQ (1-bit) | 0.47686000000000006 | 5.935243810061365 |
| 1.5-bit BQ | 0.5357              | 6.232907399069518 |
| 2-bit BQ   | 0.59602             | 6.318542930763215 |

The PR already supports GPU; GPU CI run: https://github.com/qdrant/qdrant/actions/runs/15581177437

@IvanPleshkov IvanPleshkov marked this pull request as ready for review June 11, 2025 09:35
@IvanPleshkov IvanPleshkov changed the title bq encodings BQ encodings Jun 11, 2025
@coderabbitai coderabbitai bot (Contributor) commented Jun 11, 2025

Caution

Review failed

The pull request is closed.

📝 Walkthrough

Walkthrough

This change introduces support for multiple binary quantization encoding types throughout the codebase, documentation, and API schemas. A new enum with variants for OneBit, TwoBits, and OneAndHalfBits encodings is added to Rust modules, Protobuf definitions, and OpenAPI schemas. The binary quantization configuration structures and messages are extended with an optional encoding field referencing this enum. Encoding logic is refactored to handle different quantization granularities, including computing and persisting per-dimension vector statistics required for multi-bit encodings. Tests and benchmarks are updated to explicitly specify encoding types, and new integration tests validate the accuracy and behavior of the encoding schemes. Documentation is updated accordingly.

Possibly related PRs

  • BQ encodings #6663: Introduces the same BinaryQuantizationEncoding enum and adds the encoding field to BinaryQuantization messages and related structures, including bidirectional conversions and encoding logic, showing a direct code-level connection.

Suggested labels

chore


📜 Recent review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d4a899a and 60d85be.

📒 Files selected for processing (1)
  • lib/quantization/src/encoded_vectors_binary.rs (11 hunks)


@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (2)
lib/segment/src/vector_storage/tests/custom_query_scorer_equivalency.rs (1)

100-104: ⚠️ Potential issue

Casting the sampled f32 to u8 collapses the distribution to {0,1}

x as u8 saturates negatives to 0 and maps the (0.0,1.0] range to 0‒1, so the generated values are almost exclusively 0 or 1.
That defeats the purpose of providing a balanced [-1.0, 1.0] input for BQ tests and can mask real-world errors.

-        Box::new(
-            rng.sample_iter(rand::distr::Uniform::new_inclusive(-1.0, 1.0).unwrap())
-                .map(|x| f32::from(x as u8)),
-        )
+        Box::new(
+            rng.sample_iter(rand::distr::Uniform::new_inclusive(-1.0, 1.0).unwrap())
+        )
lib/quantization/benches/binary.rs (1)

22-43: ⚠️ Potential issue

VectorParameters.count no longer matches the vector set – potential UB in benchmarks

You create 100 K vectors, then push another 100 K, but keep count at 100 K.
encode() relies on that field for bounds checks and may panic or silently process the wrong slice.

-    let vectors_count = 100_000;
+    let vectors_count = 200_000;           // or remove the second push

@@
-        &VectorParameters {
-            dim: vector_dim,
-            count: vectors_count,
+        &VectorParameters {
+            dim: vector_dim,
+            count: vectors.len() as u32,

Update the same block for the u8 run below.

♻️ Duplicate comments (1)
lib/segment/src/vector_storage/quantized/quantized_vectors.rs (1)

939-950: Same mapping block repeated – DRY violation

This second match block is identical to the one above. Once the conversion helper suggested earlier is in place, replace this entire block with the single-line conversion to avoid errors of omission when variants are extended.

🧹 Nitpick comments (17)
lib/segment/src/vector_storage/tests/custom_query_scorer_equivalency.rs (1)

95-99: Consider exercising the new encodings in this test suite

encoding: None preserves the previous 1-bit default, but the PR adds TwoBits and OneAndHalfBits. Extending the test matrix to cover those variants would protect the new code paths and prevent silent regressions.

lib/segment/tests/integration/multivector_quantization_test.rs (1)

268-270: Parameterise the test over the new encoding enum

Hard-coding encoding: None means the multivector pipeline is never validated for TwoBits or OneAndHalfBits. Adding a #[values(None, Some(OneBit), Some(TwoBits), …)] style matrix (or looping inside the test) will give end-to-end coverage at negligible runtime cost.

lib/api/src/grpc/proto/collections.proto (1)

298-302: Introduce BinaryQuantizationEncoding enum
The new enum defines all supported binary quantization schemes. Consider adding inline comments for each variant to improve readability of the proto.

docs/grpc/docs.md (2)

398-398: Approve encoding field addition
The encoding field for BinaryQuantization is documented with the correct type and anchor. Optionally, you may specify the default encoding value to aid users.


1835-1845: Approve BinaryQuantizationEncoding enum documentation
The new enum section aligns with the existing format. All anchors and markdown syntax are consistent. Consider adding brief descriptions for each enum member to enhance clarity.

lib/segment/src/index/hnsw_index/gpu/gpu_vector_storage/tests.rs (2)

147-176: Consider adding a “legacy / default‐encoding” case

All cases in this block pass an explicit BinaryQuantizationEncoding value.
A quick sanity-check that encoding: None still behaves the same as the original one-bit path is missing. Adding a single case (e.g. case::cosine_f32_default_encoding) that passes None would guard against silent regressions in fallback handling.

This could look like:

 #[case::cosine_f32_default_encoding(
     Distance::Cosine,
     TestStorageType::Dense(TestElementType::Float32),
     273,
     2057,
-    BinaryQuantizationEncoding::OneBit
+    /* encoding = */ { /* special marker – see below */ }
 )]

You would then switch the parameter type of the test to Option<BinaryQuantizationEncoding> and feed it into the config unchanged:

encoding: Option<BinaryQuantizationEncoding>
…
encoding, // instead of Some(encoding)

This keeps coverage high while exercising the “no encoding specified” path.


223-228: Unnecessary clone() of quantization_config

quantization_config is moved only once into test_gpu_vector_storage_impl; cloning here allocates needlessly in every test run.

-    test_gpu_vector_storage_impl(
-        Some(quantization_config.clone()),
+    test_gpu_vector_storage_impl(
+        Some(quantization_config),

(A similar clone appears in the SQ/PQ helpers and can be removed in a follow-up sweep.)

lib/segment/tests/integration/byte_storage_quantization_test.rs (1)

293-297: Leverage the struct’s Default to keep the diff minimal

Now that encoding is an Option, the config can be expressed more concisely and keep future additions painless:

-        QuantizationVariant::Binary => BinaryQuantizationConfig {
-            always_ram: None,
-            encoding: None,
-        }
+        QuantizationVariant::Binary => BinaryQuantizationConfig {
+            encoding: None,
+            ..Default::default()
+        }

This avoids repeating fields once defaults evolve.

docs/redoc/master/openapi.json (1)

7032-7039: Enhance BinaryQuantizationEncoding schema with documentation.

Adding a description (and optionally a default) to the enum schema will improve generated API docs and clarify intended defaults:

"BinaryQuantizationEncoding": {
  "type": "string",
  "enum": [
    "one_bit",
    "two_bits",
    "one_and_half_bits"
  ],
+ "description": "Binary quantization encoding options. Defaults to `one_bit` when unset.",
+ "default": "one_bit"
}
lib/segment/src/types.rs (1)

663-677: Consider deriving Copy (and optionally EnumIter) for the new enum

BinaryQuantizationEncoding is a small, C-like enum with no payload.
Deriving Copy lets the compiler move values by bit-copy instead of requiring an explicit clone and reduces boiler-plate at call-sites. Everywhere else in the file we derive Copy for similar enums (Distance, Order, …) so this would keep the API consistent.

Optional quality-of-life: if the variants are ever iterated over (e.g. in tests or CLI validation) adding EnumIter (already in Cargo.toml) avoids handwritten arrays.

-#[derive(Debug, Deserialize, Serialize, JsonSchema, Clone, PartialEq, Eq, Hash, Default)]
+#[derive(Debug, Deserialize, Serialize, JsonSchema, Clone, Copy, PartialEq, Eq, Hash, Default, EnumIter)]

No behaviour changes – only compile-time conveniences.

lib/segment/src/vector_storage/quantized/quantized_vectors.rs (1)

879-890: Duplicate BinaryQuantizationEncoding → Encoding mapping – extract a helper or From impl

The same match block translating BinaryQuantizationEncoding to quantization::encoded_vectors_binary::Encoding appears several times in this file (see also 939-950). Duplicating this logic risks future divergence when new enum variants are added.

-let encoding = match binary_config.encoding {
-    Some(BinaryQuantizationEncoding::OneBit) => Encoding::OneBit,
-    Some(BinaryQuantizationEncoding::TwoBits) => Encoding::TwoBits,
-    Some(BinaryQuantizationEncoding::OneAndHalfBits) => Encoding::OneAndHalfBits,
-    None => Encoding::OneBit,
-};
+let encoding = binary_config
+    .encoding
+    .unwrap_or(BinaryQuantizationEncoding::OneBit)
+    .into();               // requires `impl From<BinaryQuantizationEncoding> for Encoding`

Implementing From (or TryFrom for potential fall-backs) in encoded_vectors_binary.rs removes the boilerplate here and elsewhere, improving maintainability.
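The suggested conversion helper could look like the sketch below. The enums are inlined locally for the example; the real types live in lib/segment/src/types.rs and lib/quantization/src/encoded_vectors_binary.rs:

```rust
// Sketch of the proposed `From` impl; local stand-ins for the real enums.
#[derive(Clone, Copy, Debug, PartialEq)]
enum BinaryQuantizationEncoding {
    OneBit,
    TwoBits,
    OneAndHalfBits,
}

#[derive(Clone, Copy, Debug, PartialEq)]
enum Encoding {
    OneBit,
    TwoBits,
    OneAndHalfBits,
}

impl From<BinaryQuantizationEncoding> for Encoding {
    fn from(value: BinaryQuantizationEncoding) -> Self {
        match value {
            BinaryQuantizationEncoding::OneBit => Encoding::OneBit,
            BinaryQuantizationEncoding::TwoBits => Encoding::TwoBits,
            BinaryQuantizationEncoding::OneAndHalfBits => Encoding::OneAndHalfBits,
        }
    }
}

fn main() {
    // Every call-site then shrinks to a single expression:
    let config_encoding: Option<BinaryQuantizationEncoding> = None;
    let encoding: Encoding = config_encoding
        .unwrap_or(BinaryQuantizationEncoding::OneBit)
        .into();
    assert_eq!(encoding, Encoding::OneBit);
}
```

With the single `match` centralized in the `From` impl, adding a new variant produces one compile error at the conversion instead of several scattered ones.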

lib/quantization/tests/integration/test_binary_encodings.rs (1)

110-118: Test assertion may produce false negatives

The test assumes monotonically non-decreasing accuracy from OneBit → OneAndHalfBits → TwoBits. For some random seeds that assumption can be violated by statistical noise even when the encodings are correct, causing flaky tests.

Consider comparing each encoding against a fixed threshold or running multiple seeds and averaging instead of asserting strict ordering.
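Such a threshold-based check could be structured like this (the accuracy values and floors below are invented purely for illustration):

```rust
// Sketch of a per-encoding accuracy floor instead of strict ordering.
// Numbers are illustrative, not from the actual test.
fn main() {
    let results = [
        ("one_bit", 0.61_f64, 0.55_f64),
        ("one_and_half_bits", 0.69, 0.60),
        ("two_bits", 0.74, 0.65),
    ];
    for (name, accuracy, floor) in results {
        // Each encoding only has to clear its own floor, so random
        // seed noise between encodings cannot flip the outcome.
        assert!(
            accuracy >= floor,
            "{name} accuracy {accuracy} fell below floor {floor}"
        );
    }
}
```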

lib/api/src/grpc/conversions.rs (1)

1056-1059: Consider providing more context in the error message.

While the error handling is correct, consider including the invalid encoding value in the error message for better debugging.

-            .map_err(|_| Status::invalid_argument("Unknown binary quantization encoding"))?;
+            .map_err(|_| Status::invalid_argument(format!("Unknown binary quantization encoding: {:?}", encoding)))?;
lib/quantization/src/encoded_vectors_binary.rs (4)

269-271: Document the standard deviation fallback behavior.

The fallback to 1.0 for near-zero standard deviations is reasonable but should be documented for clarity.

Add a comment explaining this behavior:

         sds.iter_mut()
-            .for_each(|sd| *sd = if *sd < f32::EPSILON { 1.0 } else { sd.sqrt() });
+            .for_each(|sd| {
+                // Use 1.0 as fallback for near-zero variance to avoid division by zero in normalization
+                *sd = if *sd < f32::EPSILON { 1.0 } else { sd.sqrt() }
+            });

330-333: Document the hardcoded normalization parameters.

The z-score normalization uses hardcoded values that should be documented for clarity.

Consider extracting these as named constants or adding explanatory comments:

             let mean = means[i];
             let sd = standard_deviations[i];
-            let ranges = 3;
-            let v_z = (v - mean) / sd;
-            let index = (v_z + 2.0) / (4.0 / ranges as f32);
+            // Map z-scores from [-2, 2] range to [0, 3] for 3 quantization levels
+            const RANGES: f32 = 3.0;
+            const Z_OFFSET: f32 = 2.0;  // Maps z=-2 to index=0
+            const Z_RANGE: f32 = 4.0;   // Total z-score range covered
+            let v_z = (v - mean) / sd;
+            let index = (v_z + Z_OFFSET) / (Z_RANGE / RANGES);

345-371: Add documentation for the complex bit packing strategy.

The one-and-a-half bits encoding uses a non-trivial bit packing strategy that needs clear documentation.

Add a comment explaining the bit layout:

     fn encode_one_and_half_bits_vector(
         vector: &[f32],
         encoded_vector: &mut [TBitsStoreType],
         standard_deviations: &[f32],
         means: &[f32],
     ) {
+        // Bit packing strategy for 1.5 bits per dimension:
+        // - First n bits: one bit per dimension (indicates if value > first threshold)
+        // - Next n/2 bits: one bit per two dimensions (indicates if value > second threshold)
+        // This gives us 3 possible states per dimension using 1.5 bits on average
         let bits_count = u8::BITS as usize * std::mem::size_of::<TBitsStoreType>();

138-138: Pair each storage type with its matching `BITS` constant.

The byte-count branching in `get_storage_size` can select the bit count together with the byte count, removing the separate multiplication:

     fn get_storage_size(size: usize) -> usize {
-        let bytes_count = if size > 128 {
-            std::mem::size_of::<u128>()
-        } else if size > 64 {
-            std::mem::size_of::<u64>()
-        } else if size > 32 {
-            std::mem::size_of::<u32>()
-        } else {
-            std::mem::size_of::<u8>()
-        };
-
-        let bits_count = u8::BITS as usize * bytes_count;
+        let (bytes_count, bits_count) = if size > 128 {
+            (std::mem::size_of::<u128>(), u128::BITS as usize)
+        } else if size > 64 {
+            (std::mem::size_of::<u64>(), u64::BITS as usize)
+        } else if size > 32 {
+            (std::mem::size_of::<u32>(), u32::BITS as usize)
+        } else {
+            (std::mem::size_of::<u8>(), u8::BITS as usize)
+        };
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e49676f and a97f200.

📒 Files selected for processing (17)
  • docs/grpc/docs.md (3 hunks)
  • docs/redoc/master/openapi.json (1 hunks)
  • lib/api/src/grpc/conversions.rs (2 hunks)
  • lib/api/src/grpc/proto/collections.proto (1 hunks)
  • lib/api/src/grpc/qdrant.rs (2 hunks)
  • lib/quantization/benches/binary.rs (3 hunks)
  • lib/quantization/src/encoded_vectors_binary.rs (7 hunks)
  • lib/quantization/tests/integration/empty_storage.rs (1 hunks)
  • lib/quantization/tests/integration/main.rs (1 hunks)
  • lib/quantization/tests/integration/test_binary.rs (13 hunks)
  • lib/quantization/tests/integration/test_binary_encodings.rs (1 hunks)
  • lib/segment/src/index/hnsw_index/gpu/gpu_vector_storage/tests.rs (3 hunks)
  • lib/segment/src/types.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_vectors.rs (5 hunks)
  • lib/segment/src/vector_storage/tests/custom_query_scorer_equivalency.rs (1 hunks)
  • lib/segment/tests/integration/byte_storage_quantization_test.rs (1 hunks)
  • lib/segment/tests/integration/multivector_quantization_test.rs (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
docs/grpc/docs.md

95-95: Unordered list indentation
Expected: 2; Actual: 4

(MD007, ul-indent)

⏰ Context from checks skipped due to timeout of 90000ms (1)
  • GitHub Check: rust-gpu-tests (ubuntu-latest)
🔇 Additional comments (16)
lib/quantization/benches/binary.rs (1)

7-8: Import looks good
The explicit Encoding import clarifies intent and keeps the benchmark compiling with the new API.

lib/quantization/tests/integration/empty_storage.rs (1)

104-105: LGTM – explicit encoding keeps the test future-proof

lib/quantization/tests/integration/main.rs (1)

12-14: Good addition of the new test module

Including test_binary_encodings in the integration harness ensures the new variants are executed in CI.

lib/quantization/tests/integration/test_binary.rs (2)

7-7: Added new Encoding import
The import of Encoding alongside BitsStoreType and EncodedVectorsBin is correct and necessary to specify encoding schemes in the tests.


52-53: Specified Encoding::OneBit for all .encode calls
Each call to EncodedVectorsBin::encode now includes the new encoding parameter set to OneBit, preserving the original test behavior for one-bit quantization.

Also applies to: 99-100, 146-147, 190-191, 233-234, 295-296, 357-358, 417-418, 476-477, 538-539, 599-600, 658-659

lib/api/src/grpc/proto/collections.proto (1)

306-307: Add optional encoding to BinaryQuantization
The encoding field is correctly declared as optional. Ensure that server‐ and client‐side code default to a sensible variant (e.g., OneBit) when this field is omitted.

docs/grpc/docs.md (1)

95-95: Approve Table of Contents update
The new BinaryQuantizationEncoding entry is correctly placed under collections.proto and the anchor matches the section below.

🧰 Tools
🪛 markdownlint-cli2 (0.17.2)

95-95: Unordered list indentation
Expected: 2; Actual: 4

(MD007, ul-indent)

lib/segment/src/types.rs (1)

683-686: 👍 Back-compat preserved via #[serde(default)]

The new optional encoding field is gated by #[serde(default)] and skip_serializing_if, so existing persisted configs deserialize cleanly and responses remain unchanged when the field is not set. Good attention to compatibility.

lib/api/src/grpc/qdrant.rs (1)

433-435: ❌ Incorrect review comment

Consider using the enum type directly instead of i32 for stronger type–safety

prost supports using the generated enum as the field type (Option<BinaryQuantizationEncoding>) instead of the raw i32. Storing it as an i32 pushes the burden of validity checking onto every call-site and increases the chance of accidentally accepting an out-of-range value before it reaches the conversion layer.

-#[prost(enumeration = "BinaryQuantizationEncoding", optional, tag = "2")]
-pub encoding: ::core::option::Option<i32>,
+#[prost(enumeration = "BinaryQuantizationEncoding", optional, tag = "2")]
+pub encoding: ::core::option::Option<BinaryQuantizationEncoding>,

This keeps the wire format identical (still var-int field #2) while giving compile-time guarantees inside Rust code.


🏁 Script executed:

#!/bin/bash
# Display the first 30 lines of the file to check for generated code markers
head -n 30 lib/api/src/grpc/qdrant.rs

Length of output: 1467


Ignore manual edits in generated code – adjust the .proto instead

This file (lib/api/src/grpc/qdrant.rs) is fully generated by prost-build and will be overwritten on each build. You should not change the field’s Rust type here. If you want to expose Option<BinaryQuantizationEncoding> instead of Option<i32>, update the corresponding .proto definition or prost-build configuration so that the generated code uses the enum type directly.

Likely an incorrect or invalid review comment.

lib/quantization/tests/integration/test_binary_encodings.rs (1)

12-14: ⚠️ Potential issue

Compilation fails – use gen_range instead of non-existent random_range

rand::Rng provides gen_range, not random_range. The current code will not compile.

-fn generate_number(rng: &mut rand::rngs::StdRng) -> f32 {
-    rng.random_range(-1.0..1.0)
+fn generate_number(rng: &mut rand::rngs::StdRng) -> f32 {
+    rng.gen_range(-1.0..1.0)
 }
⛔ Skipped due to learnings
Learnt from: coszio
PR: qdrant/qdrant#6528
File: lib/posting_list/src/tests.rs:44-47
Timestamp: 2025-05-15T22:54:30.292Z
Learning: The rand crate version 0.9.0 and higher changed method names from `gen_*` to `random_*` (e.g., `gen_range()` became `random_range()`). Code using rand 0.9.x should use the `random_*` method names, while code using rand 0.8.x and earlier should use the `gen_*` method names.
Learnt from: coszio
PR: qdrant/qdrant#6528
File: lib/posting_list/src/tests.rs:44-47
Timestamp: 2025-05-15T22:54:30.292Z
Learning: The rand crate version 0.9.0 and newer uses method names with `random_*` prefix (like `random_range`), while versions 0.8.x and older use `gen_*` prefix (like `gen_range`). This was part of an API redesign in rand 0.9.0 released in February 2024.
Learnt from: coszio
PR: qdrant/qdrant#6528
File: lib/posting_list/src/tests.rs:44-47
Timestamp: 2025-05-15T22:54:30.292Z
Learning: The rand crate version 0.9.0 and newer uses method names with `random_*` prefix (like `random_range`), while versions 0.8.x and older use `gen_*` prefix (like `gen_range`). This naming change was introduced in rand 0.9.0-alpha.1.
Learnt from: coszio
PR: qdrant/qdrant#6446
File: lib/gridstore/benches/flush_bench.rs:18-18
Timestamp: 2025-04-29T16:48:34.967Z
Learning: The Rust rand crate version 0.9+ has introduced a top-level `rng()` function that replaces the now-deprecated `thread_rng()` function.
lib/api/src/grpc/conversions.rs (3)

1001-1015: LGTM! Clean bidirectional conversion implementation.

The conversion from internal to gRPC encoding types is well-structured with explicit matching of all enum variants.


1017-1031: LGTM! Symmetric conversion implementation.

The reverse conversion maintains consistency with the forward conversion.


1033-1046: LGTM! Properly handles optional encoding field.

The implementation correctly extracts and converts the encoding field when present, maintaining backward compatibility with the optional field.

lib/quantization/src/encoded_vectors_binary.rs (3)

22-35: LGTM! Well-structured enum with proper serialization support.

The Encoding enum is properly designed with a sensible default and a helper method for conditional serialization.


41-52: LGTM! Excellent backward compatibility design.

The metadata structure properly handles new fields with conditional serialization, ensuring backward compatibility when encoding is OneBit and statistics are empty.


377-377: LGTM! Correct use of div_ceil for 1.5 bits encoding.

The ceiling division correctly handles the fractional bits case for the one-and-a-half bits encoding.

@coderabbitai coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (5)
lib/quantization/src/encoded_vectors_binary.rs (5)

22-28: Add an explicit representation to avoid ABI / forward-compat pitfalls

serde guarantees deterministic (de)serialisation of enums, but the underlying discriminant is not stabilised across compilers/platforms.
Declaring the enum as #[repr(u8)] locks the layout and makes FFI (and potential on-disk bincode / C-bindings) safer at virtually zero cost.

-#[derive(Clone, Copy, Eq, PartialEq, Debug, Serialize, Deserialize, Default)]
+#[repr(u8)]
+#[derive(Clone, Copy, Eq, PartialEq, Debug, Serialize, Deserialize, Default)]
 pub enum Encoding {

201-209: Avoid unnecessary mean/σ computations for OneBit encoding

Mean and standard-deviation vectors are never used by encode_one_bit_vector, yet they are computed and persisted for every collection.
That is O(N·dim) extra work and extra disk I/O.

-        let means = Self::means(orig_data.clone(), count);
-        let standard_deviations = Self::standard_deviations(orig_data.clone(), &means, count);
+        let (means, standard_deviations) = if encoding == Encoding::OneBit {
+            (Vec::new(), Vec::new())
+        } else {
+            let m = Self::means(orig_data.clone(), count);
+            let sd = Self::standard_deviations(orig_data.clone(), &m, count);
+            (m, sd)
+        };

You can then pass empty slices to encode_vector for the OneBit path.
Reduces encode time and shrinks metadata considerably.


234-250: Three passes over the dataset – consider streaming mean/variance

means and standard_deviations walk the entire iterator twice after the initial count() pass (total = 3 × N).
A single streaming Welford’s algorithm can compute both mean and variance in one pass, cutting memory traffic by ~66 %.

Not critical for small collections, but noticeable on multi-GB datasets.

Also applies to: 262-272


319-333: Hard-coded ranges = 3 deserves a named constant / comment

Two-bit encoding currently splits the Z-score interval [-2,2] into three buckets, mapping to 0/1/2 set bits.
Readers unfamiliar with the academic paper will wonder why “3” and why “±2”. A short comment or a const BINS: usize = 3 would clarify intent and ease future tuning.
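An illustrative version with named constants, reconstructed from the description above (the clamping behavior is an assumption, not the PR's exact code):

```rust
// Sketch of the two-bit z-score bucketing with named constants.
const BINS: f32 = 3.0;    // three quantization levels -> 0, 1, or 2 set bits
const Z_MIN: f32 = -2.0;  // z-scores below this clamp to bucket 0
const Z_RANGE: f32 = 4.0; // covered z-score interval [-2, 2]

fn two_bit_bucket(v: f32, mean: f32, sd: f32) -> u32 {
    let z = (v - mean) / sd;
    // Map the z-score interval [-2, 2] onto bucket indices 0..=2.
    let index = (z - Z_MIN) / (Z_RANGE / BINS);
    (index.max(0.0) as u32).min(2)
}

fn main() {
    // With mean = 0 and sd = 1, z equals the raw value.
    assert_eq!(two_bit_bucket(-3.0, 0.0, 1.0), 0); // below range clamps to 0
    assert_eq!(two_bit_bucket(0.0, 0.0, 1.0), 1);  // z = 0 -> index 1.5 -> bucket 1
    assert_eq!(two_bit_bucket(3.0, 0.0, 1.0), 2);  // above range clamps to 2
}
```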


180-187: Minor semantic inconsistency in get_storage_size return units

For u8 the function returns bytes, for u128 it returns elements (converted to bytes later).
The wrapper currently compensates, but aligning semantics (either always bytes or always elements) would make future refactors safer.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a97f200 and f9b7d01.

📒 Files selected for processing (1)
  • lib/quantization/src/encoded_vectors_binary.rs (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (13)
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: test-low-resources
  • GitHub Check: integration-tests-consensus
  • GitHub Check: integration-tests
  • GitHub Check: test-consensus-compose
  • GitHub Check: Basic TLS/HTTPS tests
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: lint
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: storage-compat-test
🔇 Additional comments (3)
lib/quantization/src/encoded_vectors_binary.rs (3)

43-52: Good use of serde(default) / conditional serialisation

Old metadata files will seamlessly deserialise with the default OneBit variant, and the skip_serializing_if keeps JSON compact for that case.
Looks correct and future-proof.


307-316: Implicit assumption on input range – surface it

encode_one_bit_vector relies on all components being already normalised to [-1, 1].
If the caller forgets to scale, every positive value becomes 1 and every non-positive 0, destroying distance ordering.

Consider:

  • documenting this loudly (/// expects values in [-1,1])
  • or guarding with debug_assert!(v.abs() <= 1.0 + f32::EPSILON);

347-373: Verify bit collision strategy in 1½-bit encoding

j = vector.len() + i / 2; means dimensions (0,1) share the same “second” bit, as do (2,3), etc.
This is presumably the intended 3-bits-per-2-dims packing, but it compresses information by OR-ing contributions from two dimensions into one bit position.

Please double-check that the downstream distance metric was derived with this collision in mind; otherwise precision might degrade unexpectedly.

Comment on lines +206 to +208
let count = orig_data.clone().count();
let means = Self::means(orig_data.clone(), count);
let standard_deviations = Self::standard_deviations(orig_data.clone(), &means, count);
Member

We read vectors 3 times to compute stats, this might be expensive, especially if vectors are on disk.
Let's do streaming stat computing from the single read + ideally only if the method actually requires it.

Member

ChatGPT has some examples of online stats computation: https://chatgpt.com/share/6850a226-2898-8002-8bb1-84dc61382c5c

Contributor Author

Done. Fun fact: CodeRabbit made the same suggestion in its nitpicks:

234-250: Three passes over the dataset – consider streaming mean/variance

means and standard_deviations walk the entire iterator twice after the initial count() pass (total = 3 × N).
A single streaming Welford’s algorithm can compute both mean and variance in one pass, cutting memory traffic by ~66 %.

Not critical for small collections, but noticeable on multi-GB datasets.
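As a reference for the single-pass approach, here is a minimal Welford-style sketch with f64 accumulators (matching the "Use f64 for Welford's Algorithm" commit below). RunningStats and its method names are illustrative, not the merged VectorStats API:

```rust
// Sketch of a single-pass Welford update for per-dimension mean/variance.
// RunningStats is an illustrative name; the merged struct is VectorStats.
struct RunningStats {
    count: usize,
    mean: Vec<f64>,
    m2: Vec<f64>, // sum of squared deviations from the running mean
}

impl RunningStats {
    fn new(dim: usize) -> Self {
        Self { count: 0, mean: vec![0.0; dim], m2: vec![0.0; dim] }
    }

    fn update(&mut self, vector: &[f32]) {
        self.count += 1;
        for (i, &v) in vector.iter().enumerate() {
            let v = f64::from(v);
            let delta = v - self.mean[i];
            self.mean[i] += delta / self.count as f64;
            // Uses the *updated* mean; this is what keeps the sum stable.
            self.m2[i] += delta * (v - self.mean[i]);
        }
    }

    // Population standard deviation; whether the real code divides by
    // `count` or `count - 1` is not visible from this thread.
    fn std_devs(&self) -> Vec<f64> {
        self.m2.iter().map(|m2| (m2 / self.count as f64).sqrt()).collect()
    }
}

fn main() {
    let mut stats = RunningStats::new(2);
    for v in [[1.0f32, 10.0], [2.0, 10.0], [3.0, 10.0]] {
        stats.update(&v);
    }
    assert!((stats.mean[0] - 2.0).abs() < 1e-9);
    let sds = stats.std_devs();
    assert!((sds[0] - (2.0f64 / 3.0).sqrt()).abs() < 1e-9);
    assert!(sds[1].abs() < 1e-9);
}
```

One read of the data yields count, means, and variances together, eliminating the 3 × N traffic called out above.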

fn encode_vector(vector: &[f32]) -> EncodedBinVector<TBitsStoreType> {
let mut encoded_vector =
vec![Default::default(); TBitsStoreType::get_storage_size(vector.len())];
fn means<'a>(
Member

Let's add a struct for streaming updates of the statistics and move it into a dedicated file.

Contributor Author

Done, struct is called VectorStats in a separate file.

orig_data: impl Iterator<Item = impl AsRef<[f32]> + 'a> + Clone,
count: usize,
) -> Vec<f32> {
let dimension = orig_data.clone().next().map_or(0, |v| v.as_ref().len());
Member

I would prefer to read it from vector parameters.

Contributor Author

Done, thanks! It avoids unnecessary reads

}
}
sds.iter_mut()
.for_each(|sd| *sd = if *sd < f32::EPSILON { 1.0 } else { sd.sqrt() });
Member

if sd == 0, why is the result 1.0?

Contributor

@IvanPleshkov IvanPleshkov Jun 18, 2025

It was a zero-division check. I refactored this part; see fn encode_two_bits_value(value: f32, mean: f32, sd: f32). I added an explicit sd check and fall back to 1-bit encoding if, for some reason, sd is zero.

Comment on lines +331 to +332
let v_z = (v - mean) / sd;
let index = (v_z + 2.0) / (4.0 / ranges as f32);
Member

this part requires a comment. Where are 2.0 and 4.0 coming from?

Contributor Author

I did a refactor, see fn encode_two_bits_value(value: f32, mean: f32, sd: f32). It has comments for each step and explains all the magic numbers.

Comment on lines +359 to +332
let v_z = (v - mean) / sd;
let index = (v_z + 2.0) / (4.0 / ranges as f32);

if index >= 1.0 {
let count_ones = (index.floor() as usize).min(2);
if count_ones > 1 {
encoded_vector[i / bits_count] |= one << (i % bits_count);
}
if count_ones > 0 {
let j = vector.len() + i / 2;
encoded_vector[j / bits_count] |= one << (j % bits_count);
}
}
}
Member

Requires a comment with examples

Contributor Author

Done, I covered all the logic with comments; see fn encode_two_bits_value(value: f32, mean: f32, sd: f32). For 1.5-bit quantization I added the motivation and examples of how we pack two 2-bit values into 3 bits.
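To make the discussion concrete, here is a hedged reconstruction of the 2-bit value encoder from the quoted diff. The [-2, 2] working range (two standard deviations around the mean), the 3 buckets, and the sd == 0 fallback mapping are read off this thread, not the final merged code:

```rust
// Hedged reconstruction of the 2-bit value encoder discussed in this thread.
// The [-2, 2] range and the 3 buckets are assumptions taken from the quoted
// diff, not the merged implementation.
fn encode_two_bits_value(value: f32, mean: f32, sd: f32) -> u8 {
    if sd < f32::EPSILON {
        // Degenerate dimension: fall back to a 1-bit style decision.
        // Mapping the two outcomes onto the extreme buckets is an assumption.
        return if value > mean { 2 } else { 0 };
    }
    let z = (value - mean) / sd; // z-score of the component
    let ranges = 3.0_f32; // buckets encoded as 0, 1 or 2 set bits
    // Map the clamped z-range [-2, 2] (width 4.0) onto [0, ranges).
    let index = (z + 2.0) / (4.0 / ranges);
    index.floor().clamp(0.0, 2.0) as u8
}

fn main() {
    assert_eq!(encode_two_bits_value(0.0, 0.0, 1.0), 1); // at the mean: middle bucket
    assert_eq!(encode_two_bits_value(3.0, 0.0, 1.0), 2); // far above the mean: top bucket
    assert_eq!(encode_two_bits_value(-3.0, 0.0, 1.0), 0); // far below the mean: bottom bucket
    assert_eq!(encode_two_bits_value(1.0, 5.0, 0.0), 0); // sd == 0 fallback
}
```

This is where the magic numbers come from: 2.0 shifts the z-score so the two-sigma band starts at zero, and 4.0 is the width of that band.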

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🔭 Outside diff range comments (1)
lib/quantization/src/encoded_vectors_binary.rs (1)

459-465: Errors from create_dir_all silently ignored

parent().map(std::fs::create_dir_all); discards both the Option and the Result, so failures (e.g. permission denied) propagate as “file not found” later, making debugging painful.

-meta_path.parent().map(std::fs::create_dir_all);
+if let Some(dir) = meta_path.parent() {
+    std::fs::create_dir_all(dir)?;
+}

Repeat for the data path and vector_stats_path.

🧹 Nitpick comments (1)
lib/quantization/src/vector_stats.rs (1)

18-27: Use f32::INFINITY / NEG_INFINITY for clearer intent

f32::MAX / f32::MIN are technically correct, but INFINITY / NEG_INFINITY convey intent more explicitly and avoid the cognitive overhead of remembering which of the two M-prefixed constants is the “large positive” one.

-            min: f32::MAX,
-            max: f32::MIN,
+            min: f32::INFINITY,
+            max: f32::NEG_INFINITY,
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3105418 and 5ff2592.

📒 Files selected for processing (4)
  • lib/quantization/src/encoded_vectors_binary.rs (11 hunks)
  • lib/quantization/src/lib.rs (1 hunks)
  • lib/quantization/src/vector_stats.rs (1 hunks)
  • lib/quantization/tests/integration/test_binary_encodings.rs (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • lib/quantization/src/lib.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • lib/quantization/tests/integration/test_binary_encodings.rs
⏰ Context from checks skipped due to timeout of 90000ms (14)
  • GitHub Check: lint
  • GitHub Check: storage-compat-test
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: test-consistency
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: integration-tests
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: test-low-resources
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: integration-tests-consensus
  • GitHub Check: rust-tests-no-rocksdb (ubuntu-latest)
  • GitHub Check: test-consensus-compose
  • GitHub Check: Basic TLS/HTTPS tests
🔇 Additional comments (3)
lib/quantization/src/vector_stats.rs (1)

30-43: Mismatch risk between vector_params.count and iterator length

The online update uses count = i + 1, but the final variance divisor later relies on vector_params.count. If callers pass an incorrect count, standard deviation will be off (or divide-by-zero if it is 0/1 while the iterator contained many vectors).

Either:

  1. Assert the equality once the loop is finished.
  2. Pass the real counter down instead of relying on the config field.
 // after the loop
-debug_assert!(vector_params.count == (i + 1), "VectorParameters.count does not match actual data length");
+assert_eq!(
+    vector_params.count,
+    data_points_seen,
+    "VectorParameters.count={:?} but actually processed {:?} vectors",
+    vector_params.count,
+    data_points_seen
+);
lib/quantization/src/encoded_vectors_binary.rs (2)

382-389: usize::div_ceil requires Rust 1.71 – confirm MSRV

div_ceil stabilized in 1.71. If the project’s MSRV is lower, this will break the build for downstream users. Consider a fallback helper:

fn ceil_div(a: usize, b: usize) -> usize {
    (a + b - 1) / b
}

or bump MSRV explicitly.


298-337: Confirm packing logic for 1.5-bit encoding

The second bit is written to j = dim + i/2, effectively OR-ing the high bits of pairs (0,1), (2,3) … This matches the doc-comment, but means the last vector element in an odd-dimensional vector never shares its bit (good) yet the storage size uses ceil(dim*1.5), so there is still room.

No action required, just flagging to double-check that retrieval / scoring code applies the identical mapping; otherwise similarity scores will drift for odd dimensions.
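To make the shared-bit mapping concrete, a hypothetical reconstruction of the packing (pack_one_and_half_bits and the per-dimension level input are illustrative names, assuming each dimension was first quantized to a level in 0..=2):

```rust
// Hypothetical sketch of the 3-bits-per-2-dims packing: each dimension keeps
// its own "low" bit, while pairs (0,1), (2,3), ... OR their "high" bits into
// a shared position at offset dim + i/2.
fn pack_one_and_half_bits(levels: &[u8]) -> Vec<u8> {
    let dim = levels.len();
    let total_bits = dim + dim.div_ceil(2); // ~1.5 bits per dimension
    let mut out = vec![0u8; total_bits.div_ceil(8)];
    let mut set = |j: usize| out[j / 8] |= 1 << (j % 8);
    for (i, &level) in levels.iter().enumerate() {
        debug_assert!(level <= 2);
        if level > 1 {
            set(i); // low bit: only for the top bucket
        }
        if level > 0 {
            set(dim + i / 2); // shared high bit for the pair
        }
    }
    out
}

fn main() {
    // dim = 4: low bits occupy positions 0..4, shared high bits positions 4..6.
    let packed = pack_one_and_half_bits(&[2, 1, 0, 0]);
    // level 2 at dim 0 sets bit 0; levels >= 1 in pair (0,1) set shared bit 4.
    assert_eq!(packed, vec![0b0001_0001]);
}
```

The scoring side must apply the identical i -> dim + i/2 mapping, which is exactly the consistency concern raised above.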

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
lib/quantization/src/encoded_vectors_binary.rs (1)

472-474: Fix misuse of std::fs::exists – it returns bool, not Result

std::fs::exists already yields a boolean. Pattern-matching on Ok(true) will not compile and re-introduces the compile error called out in a prior review.
Replace the check with the idiomatic .exists() call on the Path.

-        } else if let Ok(true) = std::fs::exists(&vector_stats_path) {
-            std::fs::remove_file(&vector_stats_path)?;
+        } else if vector_stats_path.exists() {
+            std::fs::remove_file(&vector_stats_path)?;
         }
🧹 Nitpick comments (1)
lib/quantization/src/encoded_vectors_binary.rs (1)

202-208: Avoid double pass over orig_data to build stats

VectorStats::build consumes a full clone of orig_data, and the subsequent
encode loop consumes the original iterator – two complete reads.
Streaming the statistics while encoding (or re-using the same iterator with
peekable buffering) would cut I/O by half for large on-disk collections.

Consider refactoring VectorStats::build into an online updater that is fed
inside the main loop.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5ff2592 and 47a7561.

📒 Files selected for processing (1)
  • lib/quantization/src/encoded_vectors_binary.rs (11 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (13)
  • GitHub Check: lint
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-low-resources
  • GitHub Check: test-consensus-compose
  • GitHub Check: Basic TLS/HTTPS tests
  • GitHub Check: integration-tests
  • GitHub Check: integration-tests-consensus
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: storage-compat-test
🔇 Additional comments (1)
lib/quantization/src/encoded_vectors_binary.rs (1)

378-385: Confirm MSRV for .div_ceil()

usize::div_ceil is stable only from Rust 1.73.
If the project’s MSRV is lower, this call will fail to compile. Please verify
toolchain compatibility or add a polyfill.

@IvanPleshkov IvanPleshkov requested a review from generall June 18, 2025 14:09
Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
lib/quantization/src/encoded_vectors_binary.rs (1)

463-476: Critical: Fix std::fs::exists usage

The code on line 473 treats std::fs::exists as returning a Result, but it returns a plain bool. This will cause a compilation error.

Apply this fix:

-        } else if let Ok(true) = std::fs::exists(&vector_stats_path) {
+        } else if vector_stats_path.exists() {
🧹 Nitpick comments (2)
docs/grpc/docs.md (2)

96-96: Validate TOC indentation
The new nested list item is indented by 4 spaces, which violates markdownlint MD007 (expected 2 spaces for nested lists).

Apply this diff to normalize the indent:

-    - [BinaryQuantizationEncoding](#qdrant-BinaryQuantizationEncoding)
+  - [BinaryQuantizationEncoding](#qdrant-BinaryQuantizationEncoding)

1853-1863: Enhance enum variant descriptions
The BinaryQuantizationEncoding table omits descriptions for each variant, reducing clarity. Consider adding concise descriptions for each entry.

For example:

| Name              | Number | Description                     |
| ----------------- | ------ | ------------------------------- |
| OneBit            | 0      | Single-bit binary quantization  |
| TwoBits           | 1      | Two-bit binary quantization     |
| OneAndHalfBits    | 2      | One-and-a-half bit quantization |
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5d8db64 and d4a899a.

📒 Files selected for processing (19)
  • docs/grpc/docs.md (3 hunks)
  • docs/redoc/master/openapi.json (1 hunks)
  • lib/api/src/grpc/conversions.rs (2 hunks)
  • lib/api/src/grpc/proto/collections.proto (1 hunks)
  • lib/api/src/grpc/qdrant.rs (2 hunks)
  • lib/quantization/benches/binary.rs (3 hunks)
  • lib/quantization/src/encoded_vectors_binary.rs (11 hunks)
  • lib/quantization/src/lib.rs (1 hunks)
  • lib/quantization/src/vector_stats.rs (1 hunks)
  • lib/quantization/tests/integration/empty_storage.rs (1 hunks)
  • lib/quantization/tests/integration/main.rs (1 hunks)
  • lib/quantization/tests/integration/test_binary.rs (13 hunks)
  • lib/quantization/tests/integration/test_binary_encodings.rs (1 hunks)
  • lib/segment/src/index/hnsw_index/gpu/gpu_vector_storage/tests.rs (3 hunks)
  • lib/segment/src/types.rs (1 hunks)
  • lib/segment/src/vector_storage/quantized/quantized_vectors.rs (5 hunks)
  • lib/segment/src/vector_storage/tests/custom_query_scorer_equivalency.rs (1 hunks)
  • lib/segment/tests/integration/byte_storage_quantization_test.rs (1 hunks)
  • lib/segment/tests/integration/multivector_quantization_test.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (17)
  • lib/quantization/src/lib.rs
  • lib/segment/src/vector_storage/tests/custom_query_scorer_equivalency.rs
  • lib/quantization/tests/integration/empty_storage.rs
  • lib/quantization/tests/integration/main.rs
  • lib/quantization/benches/binary.rs
  • lib/segment/tests/integration/multivector_quantization_test.rs
  • lib/quantization/tests/integration/test_binary.rs
  • lib/segment/tests/integration/byte_storage_quantization_test.rs
  • lib/segment/src/index/hnsw_index/gpu/gpu_vector_storage/tests.rs
  • lib/api/src/grpc/proto/collections.proto
  • lib/api/src/grpc/qdrant.rs
  • lib/quantization/tests/integration/test_binary_encodings.rs
  • lib/segment/src/types.rs
  • lib/api/src/grpc/conversions.rs
  • lib/quantization/src/vector_stats.rs
  • lib/segment/src/vector_storage/quantized/quantized_vectors.rs
  • docs/redoc/master/openapi.json
🧰 Additional context used
🪛 markdownlint-cli2 (0.17.2)
docs/grpc/docs.md

96-96: Unordered list indentation
Expected: 2; Actual: 4

(MD007, ul-indent)

⏰ Context from checks skipped due to timeout of 90000ms (12)
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: rust-tests-no-rocksdb (ubuntu-latest)
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: test-snapshot-operations-s3-minio
  • GitHub Check: test-low-resources
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: lint
  • GitHub Check: integration-tests
  • GitHub Check: integration-tests-consensus
  • GitHub Check: storage-compat-test
🔇 Additional comments (14)
docs/grpc/docs.md (1)

398-400: Confirm encoding field docs consistency
The encoding line correctly references the new enum anchor and provides an appropriate description. Ensure it stays in sync with the protobuf definition.

lib/quantization/src/encoded_vectors_binary.rs (13)

2-2: LGTM! Clean integration of new dependencies and data structures.

The Path/PathBuf imports and VectorStats integration are well-placed to support the multi-bit encoding functionality.

Also applies to: 11-11, 20-20


24-36: Well-designed enum with appropriate defaults.

The Encoding enum cleanly represents the different quantization schemes with sensible defaults and helper methods.


45-47: Excellent use of serde attributes for backward compatibility.

The conditional serialization ensures the encoding field is only stored when it differs from the default OneBit encoding.


177-177: Good improvement using named constant over magic number.

Using u8::BITS is more self-documenting than the hardcoded 8.


197-197: Smart conditional computation of vector statistics.

Computing statistics only for multi-bit encodings avoids unnecessary overhead for OneBit encoding.

Also applies to: 202-208


231-252: Clean dispatch pattern for different encoding schemes.

The match-based dispatch to specialized encoding functions promotes good code organization and extensibility.


266-293: Robust implementation with defensive fallback.

The fallback to one-bit encoding when vector stats are unavailable prevents runtime errors while maintaining functionality.


295-333: Excellent documentation with clear examples.

The detailed comments and examples make the complex 1.5-bit encoding logic easy to understand and verify.


335-376: Mathematically sound quantization with excellent documentation.

The z-score based approach is well-implemented with clear explanations of the bit encoding scheme and proper handling of edge cases.


378-393: Correct size calculations for different encoding schemes.

The dimension multipliers and rounding logic are appropriate for each encoding type, with proper use of div_ceil for fractional bit counts.


444-448: Sensible path construction for vector statistics file.

Placing the vector stats file alongside the metadata file is logical and the Option return type handles path construction failures appropriately.


487-507: Well-structured conditional loading of vector statistics.

The loading logic appropriately mirrors the save behavior and handles missing vector stats files correctly based on encoding type.


523-523: Critical consistency: Query encoding matches stored vector encoding.

Using the stored encoding and vector stats ensures queries are encoded consistently with the indexed vectors.

@generall generall merged commit fe5becd into dev Jun 19, 2025
15 checks passed
@generall generall deleted the bq-encodings branch June 19, 2025 16:21
@generall generall added this to the 2bit Quantization milestone Jul 17, 2025
generall pushed a commit that referenced this pull request Jul 17, 2025
* bq encodings

* are you happy clippy

* are you happy clippy

* are you happy clippy

* are you happy clippy

* gpu tests

* update models

* are you happy fmt

* move additional bits to the end

* fix tests

* Welford's Algorithm

* review remarks

* are you happy clippy

* remove debug println in test

* coderabit nitpicks

* remove unnecessary clone and partialeq

* Use f64 for Welford's Algorithm

* try fix ci

* revert cargo-nextest

* add debug assertions