This repository was archived by the owner on Mar 4, 2026. It is now read-only.

AggregateQuery loses nanoseconds in readTime #2330

@wvanderdeijl

Description

During unit tests we create sample documents and then very quickly execute queries using a readTime. In some tests we simply use Date.now() for the readTime, and found that tests sometimes fail to find the (very) recently created documents when using an AggregateQuery.

After some debugging we think we found the issue. When executing a normal query you end up in Query.toProto, which sets the readTime on the request like this:

runQueryRequest.readTime = transactionOrReadTime.toProto().timestampValue;

However, when using a readTime with an AggregateQuery you end up in AggregateQuery.toProto, which sets the readTime on the request like this:

runQueryRequest.readTime = transactionOrReadTime;

So, for the AggregateQuery the .toProto().timestampValue part is missing. That call would have converted the _nanoseconds property to the nanos field required by protobuf. Without this conversion, nanos remains undefined and is serialized as 0 over the wire.
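The difference between the two code paths can be sketched with stand-in types (TimestampLike and ProtoTimestamp are assumed names for illustration, not the real library classes):

```typescript
// Hypothetical stand-ins for the library types, just to show the two paths.
interface ProtoTimestamp {
  seconds: number;
  nanos: number;
}

class TimestampLike {
  constructor(
    readonly _seconds: number,
    readonly _nanoseconds: number,
  ) {}

  // Mirrors the conversion that Query.toProto performs.
  toProto(): { timestampValue: ProtoTimestamp } {
    return {
      timestampValue: { seconds: this._seconds, nanos: this._nanoseconds },
    };
  }
}

const readTime = new TimestampLike(1744812999, 978000000);

// Query path: serialize first, so the protobuf nanos field is populated.
const queryReadTime = readTime.toProto().timestampValue;

// AggregateQuery path: the Timestamp object is assigned as-is, so the
// protobuf `nanos` field is never set and defaults to 0 on the wire.
const aggregateReadTime = readTime as unknown as Partial<ProtoTimestamp>;

console.log(queryReadTime.nanos);     // 978000000
console.log(aggregateReadTime.nanos); // undefined
```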

The net effect is that a readTime of { _seconds: 1744812999, _nanoseconds: 978000000 } is sent to the backend as { _seconds: 1744812999, _nanoseconds: 0 }, meaning the aggregation query will not include documents that were created between { _seconds: 1744812999, _nanoseconds: 0 } and { _seconds: 1744812999, _nanoseconds: 978000000 }. This window is at most 1 second, but it is something we hit during our unit tests.
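The size of the lost window follows from simple arithmetic on the millisecond clock (a sketch using the example values above):

```typescript
// Date.now() has millisecond precision; Timestamp.fromMillis splits it into
// whole seconds plus the sub-second remainder expressed in nanoseconds.
const millis = 1744812999978; // example wall-clock value from above

const seconds = Math.floor(millis / 1000); // 1744812999
const nanos = (millis % 1000) * 1_000_000; // 978000000

// Dropping nanos rolls the effective readTime back to the start of the
// second, excluding anything written in the first `millis % 1000` ms of it.
const windowMs = millis % 1000;

console.log(seconds, nanos, windowMs); // 1744812999 978000000 978
```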

  1. Is this a client library issue or a product issue?
    This is a client library issue.

  2. Did someone already solve this?
    Could not find anything.

  3. Do you have a support contract?
    Yes, support case #58916783 filed.

Environment details

  • OS: macOS 15.3.2
  • Node.js version: 22.14.0
  • npm version: 10.9.2
  • @google-cloud/firestore version: 7.11.0

Steps to reproduce

Minimal steps to reproduce:

import { Firestore } from '@google-cloud/firestore';
import { Timestamp } from '@google-cloud/firestore/build/src/timestamp';
import assert from 'node:assert';
import { randomUUID } from 'node:crypto';

void (async () => {
    const firestore = new Firestore({ projectId: 'my-project' });
    const collection = firestore.collection(randomUUID());
    const doc = collection.doc();
    await doc.create({ some: 'data' });
    const query = collection.count() as import('@google-cloud/firestore/build/src/reference/aggregate-query').AggregateQuery<
        { count: FirebaseFirestore.AggregateField<number> },
        FirebaseFirestore.DocumentData,
        FirebaseFirestore.DocumentData
    >;
    // uncomment the line below to make the test succeed (since we wait for the next second)
    // await new Promise(resolve => setTimeout(resolve, 1000));
    const timestamp = Timestamp.fromMillis(Date.now());
    const count = await query._get(timestamp);
    // this assertion fails as 0 documents were found
    assert.strictEqual(count.result.data().count, 1);
})();
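The commented-out sleep above works around the bug by waiting a full second. A slightly tighter variant is to wait only until the next whole-second boundary, so a readTime whose nanos get truncated to 0 still covers everything created earlier in the test (msUntilNextSecond is a hypothetical helper, not a library API):

```typescript
// How long to sleep so the next Date.now() lands on a whole second,
// making the nanosecond truncation harmless for this test.
function msUntilNextSecond(nowMs: number): number {
  const remainder = nowMs % 1000;
  return remainder === 0 ? 0 : 1000 - remainder;
}

// Usage in a test, mirroring the commented-out workaround above:
// await new Promise(resolve => setTimeout(resolve, msUntilNextSecond(Date.now())));

console.log(msUntilNextSecond(1744812999978)); // 22
console.log(msUntilNextSecond(1744813000000)); // 0
```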

Making sure to follow these steps will guarantee the quickest resolution possible.

Thanks!

Metadata
Labels

  • api: firestore (Issues related to the googleapis/nodejs-firestore API)
  • type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns)
