Add unindexed vectors to telemetry and metrics API #7307

Merged
timvisee merged 6 commits into dev from metrics-unindexed-vectors on Oct 29, 2025

Conversation

@JojiiOfficial (Contributor) commented Sep 25, 2025

Depends on #7302

Adds information about how many vectors will be excluded from requests with indexed_only enabled.

# HELP collection_index_only_excluded_points amount of points excluded in indexed_only requests
# TYPE collection_index_only_excluded_points gauge
collection_index_only_excluded_points{id="benchmark",vector=""} 89500

@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 3a0d050 to dd7d9c9 on September 25, 2025 09:35
@JojiiOfficial marked this pull request as ready for review on September 25, 2025 09:37
@generall (Member) left a comment:
If this metric is intended to cover

Number of points (per collection) which will be excluded from search if indexed_only is used

then it is not computed correctly.

A segment is only excluded from search with indexed_only if its size is bigger than

indexing_threshold_kb.max(full_scan_threshold_kb)

@JojiiOfficial force-pushed the metrics-points-per-collection branch from 9f27753 to 3d94324 on September 26, 2025 08:37
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 892735f to f4288d7 on September 26, 2025 15:39

if !indexed_only_excluded.is_empty() {
    metrics.push(metric_family(
        "collection_index_only_excluded",
Comment from a Member:

Following the OpenMetrics naming convention, shouldn't this be called something like:

collection_index_only_excluded_vectors_total

Where we have:

  • 'vectors' - as a unit for the count
  • 'total' - to emphasize this is an aggregated count

@JojiiOfficial force-pushed the metrics-points-per-collection branch from 3d94324 to 39f8520 on October 22, 2025 14:15
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 51f9bf3 to c97e534 on October 22, 2025 14:23
@JojiiOfficial force-pushed the metrics-points-per-collection branch from 39f8520 to 086d4ff on October 23, 2025 09:04
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 7001789 to a580671 on October 23, 2025 09:06
Comment on lines +200 to +204
.filter(|(_, vector_size_bytes, _)| {
    // Filter out only large segments that do not support full-scan, as smaller segments can
    // be searched quickly without using an index and are included in index-only searches.
    *vector_size_bytes > search_optimized_threshold_bytes
})
Comment from a Member:

Thought: maybe we can add a function to segments itself that tells us whether to include it in indexed_only search. That would make the logic here simpler.

Comment from a Member:

Implemented in #7463

I suggest merging it into this one if you like it; then we can merge this PR as a whole.

Comment from a Member:

We will merge it later to avoid blocking our current PR queue.

@timvisee force-pushed the metrics-unindexed-vectors branch from 938b102 to 2f25d7d on October 28, 2025 16:33
@JojiiOfficial force-pushed the metrics-points-per-collection branch from 086d4ff to bc6b84c on October 29, 2025 14:09
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 2f25d7d to d382289 on October 29, 2025 14:14
@JojiiOfficial force-pushed the metrics-points-per-collection branch from bc6b84c to 9666988 on October 29, 2025 15:19
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from d382289 to 6a6386c on October 29, 2025 15:23
Base automatically changed from metrics-points-per-collection to dev on October 29, 2025 15:40
@JojiiOfficial force-pushed the metrics-unindexed-vectors branch from 6a6386c to e7cbba6 on October 29, 2025 15:41
@timvisee merged commit c890d3a into dev on Oct 29, 2025
12 checks passed
@timvisee deleted the metrics-unindexed-vectors branch on October 29, 2025 15:44
coderabbitai bot commented Oct 29, 2025

📝 Walkthrough

This PR extends the telemetry system with tracking for vectors excluded from index-only searches. The change adds a new optional indexed_only_excluded_vectors field to LocalShardTelemetry that maps vector names to point counts. The field is populated conditionally based on telemetry detail level (Level4 and above) to avoid overhead. Simultaneously, the optimizer_logs boolean field is removed from TelemetryDetail and all related telemetry construction sites. The metrics pipeline is updated to emit a new collection_indexed_only_excluded_points gauge when data is available. The OpenAPI schema is updated to reflect the new telemetry field.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

  • lib/collection/src/shards/local_shard/telemetry.rs: Contains new get_index_only_excluded_vectors() helper function with logic to iterate segments and calculate excluded vectors above a computed threshold; includes level-gated conditional logic that skips collection for detail levels below Level4. This warrants careful review of the threshold calculation and exclusion logic.
  • src/common/metrics.rs: New metric emission logic for collection_indexed_only_excluded_points with per-vector labels; requires verification of metric family construction and guard conditions.
  • Coordinated removal of optimizer_logs: This breaking change spans TelemetryDetail struct, service API construction, and telemetry reporting initialization; verify all removal sites are consistent and no references remain.
  • Level-gating changes: Multiple locations now condition telemetry collection on detail.level >= Level4; verify the logic is applied uniformly and intentionally raises metrics level from Level3 to Level4 in service API.

Possibly related PRs

  • Telemetry improvements #6390: Continues a telemetry refactor series by adding level-gated collection to LocalShardTelemetry and further refining the removal of optimizer_logs from the same telemetry paths.
  • Improve telemetry logic and test #6399: Modifies the same telemetry structures (LocalShardTelemetry) and conditional per-shard/segment data gathering behavior, particularly in lib/collection/src/shards/telemetry.rs and local shard telemetry logic.

Suggested reviewers

  • generall
  • KShivendu
  • timvisee

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage — ⚠️ Warning: docstring coverage is 71.43%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title Check — ✅ Passed: The title "Add unindexed vectors to telemetry and metrics API" is clear and directly describes the primary change in the pull request. The changeset adds a new optional field indexed_only_excluded_vectors to the telemetry schema, implements logic to calculate vectors excluded from indexed-only searches, and exposes this information via a new Prometheus metric (collection_indexed_only_excluded_points). The title accurately captures this main objective without being vague or overly broad, and it aligns with the stated PR objectives.
  • Description Check — ✅ Passed: The pull request description is related to the changeset and provides meaningful information about what is being added. It states that the PR "Adds information how many vectors will be excluded from requests with indexed_only enabled" and includes a concrete example of the new metric output. This aligns with the actual changes shown in the raw summary, where multiple files were modified to track and expose indexed_only_excluded_vectors through telemetry and metrics APIs. The description is sufficiently informative to convey the purpose of the changes.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch metrics-unindexed-vectors

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai bot left a comment:

Actionable comments posted: 2

♻️ Duplicate comments (1)
lib/collection/src/shards/local_shard/telemetry.rs (1)

183-204: Defer “should this segment be included in indexed_only?” to segment API (once available).

Per earlier discussion, prefer a single per‑segment decision method to centralize logic (index presence + threshold checks). Since this was implemented in PR #7463, consider using it here when merged to avoid drift.

Example (pseudocode):

if !segment_guard.include_in_indexed_only(&vector_name, search_optimized_threshold_bytes) {
    // accumulate excluded
}
🧹 Nitpick comments (6)
src/actix/api/service_api.rs (1)

81-84: Confirm Level4 is required for metrics population

Raising TelemetryDetail.level to Level4 makes sense if indexed_only_excluded_vectors is only populated at Level4. Please confirm there’s no cheaper level that still provides this field, and consider noting this rationale in a short comment to avoid future regressions.

lib/collection/src/shards/telemetry.rs (1)

68-70: Telemetry privacy: do we anonymize map keys (vector names)?

indexed_only_excluded_vectors is HashMap<String, usize>. When anonymize() is requested, do HashMap keys get anonymized? If not, vector names may leak. Consider adding an anonymization strategy for keys or documenting that names are intended to be public.

Additionally, consider u64 for counts to avoid platform-dependent usize serialization differences.

docs/redoc/master/openapi.json (2)

12217-12225: Please document semantics and provide an example for indexed_only_excluded_vectors.

Add a short description clarifying:

  • It maps vector names to the number of points excluded when indexed_only=true.
  • The empty string key represents the default unnamed vector (if applicable).

Optional example improves discoverability.

Apply this minimal patch:

           "indexed_only_excluded_vectors": {
+            "description": "Per-vector counts of points excluded from search when `indexed_only` is enabled. Key is the vector name; use empty string \"\" for the default unnamed vector.",
             "type": "object",
             "additionalProperties": {
               "type": "integer",
               "format": "uint",
               "minimum": 0
             },
-            "nullable": true
+            "nullable": true,
+            "example": { "": 89500, "image-embeddings": 1200 }
           }

12217-12225: Rename struct field to align with metric and semantic meaning.

The struct field indexed_only_excluded_vectors represents point counts (confirmed by metric collection_indexed_only_excluded_points and its description). Rename the field to indexed_only_excluded_points in:

  • lib/collection/src/shards/telemetry.rs (line 69)
  • Update all usages in lib/collection/src/shards/local_shard/telemetry.rs and lib/collection/src/shards/dummy_shard.rs
  • Update docs/redoc/master/openapi.json to reflect the field name change

This aligns the struct field naming with the Prometheus metric collection_indexed_only_excluded_points and eliminates confusion about what the values represent.

lib/collection/src/shards/local_shard/telemetry.rs (2)

21-43: Good use of spawn_blocking; avoid name shadowing for clarity.

Logic is sound. Minor readability: don’t reuse segments for telemetry results (it shadows the earlier self.segments). Rename to segments_telemetry.

Apply:

-        let (segments, index_only_excluded_vectors) = segments_data.unwrap_or_default();
+        let (segments_telemetry, index_only_excluded_vectors) = segments_data.unwrap_or_default();
@@
-            segments: if segments.is_empty() {
+            segments: if segments_telemetry.is_empty() {
                 None
             } else {
-                Some(segments)
+                Some(segments_telemetry)
             },

Also applies to: 48-49, 80-84


171-198: Avoid unwrap; simplify error handling when fetching vector storage size.

Refactor to a match to remove unwrap() after the error branch and reduce footguns.

-                    let vector_storage_size =
-                        segment_guard.available_vectors_size_in_bytes(&vector_name);
-
-                    if let Err(err) = vector_storage_size {
-                        log::error!("Failed to get vector size from segment: {err:?}");
-                        return None;
-                    }
-
-                    let points = segment_guard.available_point_count();
-                    Some((vector_name, vector_storage_size.unwrap(), points))
+                    match segment_guard.available_vectors_size_in_bytes(&vector_name) {
+                        Ok(size_bytes) => {
+                            let points = segment_guard.available_point_count();
+                            Some((vector_name, size_bytes, points))
+                        }
+                        Err(err) => {
+                            log::error!("Failed to get vector size from segment: {err:?}");
+                            None
+                        }
+                    }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eff9107 and e7cbba6.

📒 Files selected for processing (8)
  • docs/redoc/master/openapi.json (1 hunks)
  • lib/collection/src/shards/dummy_shard.rs (1 hunks)
  • lib/collection/src/shards/local_shard/telemetry.rs (3 hunks)
  • lib/collection/src/shards/telemetry.rs (1 hunks)
  • lib/common/common/src/types.rs (0 hunks)
  • src/actix/api/service_api.rs (1 hunks)
  • src/common/metrics.rs (3 hunks)
  • src/common/telemetry_reporting.rs (0 hunks)
💤 Files with no reviewable changes (2)
  • lib/common/common/src/types.rs
  • src/common/telemetry_reporting.rs
🧰 Additional context used
📓 Path-based instructions (2)
**/*.rs

📄 CodeRabbit inference engine (.github/review-rules.md)

**/*.rs: Prefer explicit SomeType::from(x) over implicit x.into() in Rust code
Do not use transmute_from_u8, transmute_to_u8, transmute_from_u8_to_slice, transmute_from_u8_to_mut_slice, transmute_to_u8_slice in new code; use bytemuck or zerocopy instead

Files:

  • src/actix/api/service_api.rs
  • lib/collection/src/shards/telemetry.rs
  • lib/collection/src/shards/dummy_shard.rs
  • lib/collection/src/shards/local_shard/telemetry.rs
  • src/common/metrics.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (.github/review-rules.md)

**/src/**/*.rs: Prefer exhaustive match arms over a catch-all _ arm to avoid missing new enum variants (except in tests/benchmarks or when provably safe)
Prefer explicit field ignoring with : _ over .. in struct patterns (except in tests/benchmarks or when provably safe)

Files:

  • src/actix/api/service_api.rs
  • lib/collection/src/shards/telemetry.rs
  • lib/collection/src/shards/dummy_shard.rs
  • lib/collection/src/shards/local_shard/telemetry.rs
  • src/common/metrics.rs
🧬 Code graph analysis (1)
lib/collection/src/shards/local_shard/telemetry.rs (3)
lib/segment/src/vector_storage/common.rs (1)
  • get_async_scorer (12-14)
lib/collection/src/shards/dummy_shard.rs (1)
  • get_telemetry_data (61-75)
lib/collection/src/shards/local_shard/mod.rs (1)
  • segments (271-273)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (10)
  • GitHub Check: rust-tests (ubuntu-latest)
  • GitHub Check: rust-tests (windows-latest)
  • GitHub Check: rust-tests-no-rocksdb (ubuntu-latest)
  • GitHub Check: rust-tests (macos-latest)
  • GitHub Check: e2e-tests
  • GitHub Check: test-shard-snapshot-api-s3-minio
  • GitHub Check: test-consistency
  • GitHub Check: integration-tests
  • GitHub Check: integration-tests-consensus
  • GitHub Check: lint
🔇 Additional comments (6)
lib/collection/src/shards/dummy_shard.rs (1)

73-73: Initialization of new telemetry field looks correct

Setting indexed_only_excluded_vectors: None keeps DummyShard aligned with the expanded telemetry without overhead.

lib/collection/src/shards/local_shard/telemetry.rs (5)

52-61: LGTM on optimizations aggregation.

Fold + telemetry retrieval is correct and efficient.


88-94: LGTM on Level4 gating and Option construction.

The .then(..) and .then_some(..) usage is clean and avoids unnecessary payload.


200-204: Semantics look correct for excluding only large unindexed segments.

Matches the intent to not count small segments under the threshold.


161-170: Unit mismatch concern is incorrect; overflow risk remains valid.

Both indexing_threshold and hnsw_config.full_scan_threshold use the same unit (KB). The HnswConfig field notes: 1 KB = 1 vector of size 256, and segments_searcher.rs treats both as KB without conversion.

However, use saturating_mul to prevent overflow when multiplying by BYTES_IN_KB:

-    let search_optimized_threshold_bytes = indexing_threshold.max(collection_config.hnsw_config.full_scan_threshold)
-        // convert KB to bytes
-        * BYTES_IN_KB;
+    let threshold_kb = indexing_threshold.max(collection_config.hnsw_config.full_scan_threshold) as u64;
+    let search_optimized_threshold_bytes =
+        threshold_kb.saturating_mul(BYTES_IN_KB as u64) as usize;

196-199: Use segment.available_vector_count(&vector_name) to get accurate per-vector counts instead of total points.

The current code counts available_point_count() which returns total points in the segment, but when some points lack vector_name, this overstates the vector count for that name. The API segment.available_vector_count(vector_name) exists and should be used instead to obtain the accurate count of available vectors for the specific vector name.

let points = segment_guard.available_vector_count(&vector_name)?;
Some((vector_name, vector_storage_size.unwrap(), points))

If not already, confirm this method exists on the Segment trait and wire it through here. Also apply this to lines 205–211.

Comment on lines +154 to +157
/// Returns the number of vectors which will be excluded from requests with `indexed_only` enabled.
///
/// This effectively counts vectors in large unindexed segments.
fn get_index_only_excluded_vectors(
Comment from coderabbitai (Contributor):
⚠️ Potential issue | 🟡 Minor

Docstring mismatches what is returned (points vs. vectors).

You aggregate counts of excluded points per vector name. Update docs to prevent confusion and align with the metric name.

-/// Returns the number of vectors which will be excluded from requests with `indexed_only` enabled.
+/// Returns the number of points that will be excluded from requests with `indexed_only` enabled.
 ///
-/// This effectively counts vectors in large unindexed segments.
+/// This effectively counts points in large segments where the given vector is unindexed.

Comment on lines +272 to +293
let points_excluded_from_index_only = collection
    .shards
    .iter()
    .flatten()
    .filter_map(|shard| shard.local.as_ref())
    .filter_map(|local| local.indexed_only_excluded_vectors.as_ref())
    .flatten()
    .fold(
        HashMap::<&str, usize>::default(),
        |mut acc, (name, vector_size)| {
            *acc.entry(name).or_insert(0) += vector_size;
            acc
        },
    );

for (name, vector_size) in points_excluded_from_index_only {
    indexed_only_excluded.push(gauge(
        vector_size as f64,
        &[("id", &collection.id), ("vector", name)],
    ))
}
Comment from coderabbitai (Contributor):

⚠️ Potential issue | 🔴 Critical

Fix type mismatch: adding &usize instead of usize

vector_size here is &usize (from iterating a HashMap<String, usize>), so += vector_size won’t compile. Dereference it.

Apply this minimal fix:

-                .fold(
-                    HashMap::<&str, usize>::default(),
-                    |mut acc, (name, vector_size)| {
-                        *acc.entry(name).or_insert(0) += vector_size;
-                        acc
-                    },
-                );
+                .fold(
+                    HashMap::<&str, usize>::default(),
+                    |mut acc, (name, vector_size)| {
+                        *acc.entry(name).or_insert(0) += *vector_size;
+                        acc
+                    },
+                );

Optional clarity and lifetime simplification (own keys; also rename vector_size -> count):

-                .fold(
-                    HashMap::<&str, usize>::default(),
-                    |mut acc, (name, vector_size)| {
-                        *acc.entry(name).or_insert(0) += *vector_size;
-                        acc
-                    },
-                );
+                .fold(
+                    HashMap::<String, usize>::default(),
+                    |mut acc, (name, count)| {
+                        *acc.entry(name.clone()).or_insert(0) += *count;
+                        acc
+                    },
+                );
@@
-            for (name, vector_size) in points_excluded_from_index_only {
-                indexed_only_excluded.push(gauge(
-                    vector_size as f64,
-                    &[("id", &collection.id), ("vector", name)],
-                ))
-            }
+            for (name, count) in points_excluded_from_index_only {
+                indexed_only_excluded.push(gauge(
+                    count as f64,
+                    &[("id", &collection.id), ("vector", &name)],
+                ))
+            }

timvisee added a commit that referenced this pull request Nov 14, 2025
* New metric for excluded vectors in indexed_only requests

* Make metric name align with prometheus naming convention

* Fix nit

* Fix tests

* Fix openapi

* Rename index_only into indexed_only

---------

Co-authored-by: timvisee <tim@visee.me>
@timvisee mentioned this pull request on Nov 14, 2025
@coderabbitai bot mentioned this pull request on Jan 4, 2026
