
@unnoq (Member) commented Nov 26, 2025

Summary by CodeRabbit

  • New Features

    • Cloudflare Durable Objects available as a publisher adapter with durable pub/sub and resume/replay support
    • New Durable Object-backed publisher for publish/subscribe use, packaged as an experimental Durable Object publisher
    • Configurable customJsonSerializers option added to Redis publishers
  • Playground

    • ChatRoomV2 demo added
    • RPC client now includes automatic retry support
  • Documentation

    • Publisher docs updated with Durable Object examples and serializer guidance


@unnoq unnoq requested a review from Copilot November 26, 2025 13:29
@dosubot dosubot bot added the size:XXL This PR changes 1000+ lines, ignoring generated files. label Nov 26, 2025
@coderabbitai bot commented Nov 26, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a new package @orpc/experimental-publisher-durable-object implementing a Cloudflare Durable Object publisher (WebSocket pub/sub, resume storage, alarm lifecycle), a DurablePublisher client, docs updates, playground wiring, tests, and small public API additions to existing Redis publishers.

Changes

  • Docs & Examples (apps/content/docs/helpers/publisher.md): Added PublisherDurableObject adapter entry, a Cloudflare Durable Object usage example, and updated publisher examples to show the customJsonSerializers option.
  • New Package: durable-object (packages/publisher-durable-object/*): New package manifest, README, build config, tsconfigs, .gitignore, and package exports for @orpc/experimental-publisher-durable-object.
  • Durable Object implementation (packages/publisher-durable-object/src/durable-object.ts, packages/publisher-durable-object/src/durable-object.test.ts): Added PublisherDurableObject class with fetch/subscribe/publish flows, resume storage integration, alarm handling, and comprehensive tests.
  • DurablePublisher client (packages/publisher-durable-object/src/durable-publisher.ts, packages/publisher-durable-object/src/durable-publisher.test.ts): New DurablePublisher client (publish/subscribe, serializer, prefix/getStubByName options) and tests for publish/subscribe/error cases.
  • Resume storage (packages/publisher-durable-object/src/resume-storage.ts, packages/publisher-durable-object/src/resume-storage.test.ts): Implemented ResumeStorage (SQLite-backed via DurableObjectState), retention/inactivity policies, schema management, alarm scheduling, and tests.
  • Types & Exports (packages/publisher-durable-object/src/{index.ts,types.ts,index.test.ts}): Added SerializedMessage type and package root exports re-exporting durable-object, durable-publisher, resume-storage, and types.
  • Test helpers (packages/publisher-durable-object/tests/shared.ts): Added in-memory DurableObjectState and WebSocket helpers for tests (better-sqlite3-backed mocks).
  • Workspace / Tooling (package.json, tsconfig.json, packages/publisher-durable-object/tsconfig*.json, packages/publisher-durable-object/build.config.ts): Added devDependency @cloudflare/vitest-pool-workers, excluded the package from the root TS build, and marked cloudflare:workers external in the build config.
  • Playground wiring (playgrounds/cloudflare-worker/{wrangler.jsonc,package.json,worker/**,src/**}): Added PUBLISHER_DO binding/migration/compatibility flag, instantiated DurablePublisher in the worker context, added a message-v2 router and ChatRoomV2 UI, and added devDependencies referencing the new packages.
  • Redis publishers API (@orpc/experimental-publisher/{ioredis,upstash-redis}): Public API change: added customJsonSerializers option (default []) to IORedisPublisher and UpstashRedisPublisher constructors/options.
  • Playground client pipeline (playgrounds/cloudflare-worker/src/lib/orpc.ts): Added ClientRetryPlugin to the RPC link pipeline, updated the client type to include the plugin context, and adjusted the BatchLinkPlugin exclude predicate.
  • Misc tests / UI (playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx, packages/.../*.test-d.ts): New ChatRoomV2 component and small test adjustments (non-runtime).

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant Client
  participant DurablePublisher
  participant DurableNS as Durable Namespace
  participant PublisherDO
  participant ResumeStorage
  Client->>DurablePublisher: publish(event, payload)
  DurablePublisher->>DurablePublisher: serialize payload
  DurablePublisher->>DurableNS: getStub(prefix+event)
  DurableNS-->>DurablePublisher: stub
  DurablePublisher->>PublisherDO: POST /publish (serialized)
  PublisherDO->>ResumeStorage: store(serialized)
  ResumeStorage-->>PublisherDO: stored event (id)
  PublisherDO->>PublisherDO: broadcast to active WebSockets
  PublisherDO-->>DurablePublisher: 200 OK
sequenceDiagram
  autonumber
  participant Subscriber
  participant PublisherDO
  participant ResumeStorage
  Subscriber->>PublisherDO: GET /subscribe (lastEventId header)
  PublisherDO->>PublisherDO: accept WebSocketPair
  PublisherDO->>ResumeStorage: getEventsAfter(lastEventId)
  ResumeStorage-->>PublisherDO: [eventN, eventN+1, ...]
  PublisherDO->>Subscriber: send replayed events over WebSocket
  Subscriber-->>PublisherDO: WebSocket established (101)
  PublisherDO->>PublisherDO: schedule/adjust alarms as needed

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Pay extra attention to:
    • ResumeStorage SQL schema, TEXT ID handling, and alarm scheduling/rescheduling logic.
    • WebSocket lifecycle, per-send error handling, and cleanup in PublisherDurableObject.
    • Serialization/deserialization, event meta propagation, and DurablePublisher HTTP/websocket error paths.
    • Tests that mock DurableObjectState and in-memory SQLite behavior.

Possibly related PRs

  • unnoq/orpc#265 — Adds ClientRetryPlugin used by the playground client pipeline in this change.
  • unnoq/orpc#1094 — Modifies Redis publisher constructors/options; related to the customJsonSerializers additions.
  • unnoq/orpc#1047 — Related changes to BatchLinkPlugin exclude predicate also touched by this PR.

Poem

🐰
I hop through sockets, stow each little note,
Durable burrows guard every tiny float.
Replay at dawn, resume where you roam,
I nibble the bugs and guide messages home. 🥕

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 0.00%, below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title clearly and concisely summarizes the main change: adding a Cloudflare Durable Object adapter to the publisher feature.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3f5e971 and 6bd6b32.

📒 Files selected for processing (1)
  • playgrounds/cloudflare-worker/src/lib/orpc.ts (2 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
playgrounds/cloudflare-worker/src/lib/orpc.ts (3)
packages/client/src/plugins/retry.ts (2)
  • ClientRetryPlugin (53-199)
  • ClientRetryPluginContext (13-40)
packages/server/src/router-client.ts (1)
  • RouterClient (15-20)
packages/client/src/client.ts (1)
  • createORPCClient (17-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: publish-commit
  • GitHub Check: test
  • GitHub Check: lint
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
playgrounds/cloudflare-worker/src/lib/orpc.ts (4)

7-8: LGTM!

The imports for ClientRetryPluginContext and ClientRetryPlugin are correctly added and used in the file.


24-24: LGTM!

The ClientRetryPlugin is correctly added with default options, enabling retry functionality for the client.


28-28: LGTM!

The type annotation correctly adds ClientRetryPluginContext as the second generic parameter, exposing retry options to client users.


18-18: Fix the substring matching bug in the exclude predicate.

The current implementation uses path.includes('sse') and path.includes('on'), which checks if any path segment contains these strings as substrings. This will incorrectly exclude many unintended paths:

  • Paths with 'on' substring: ['session'], ['person'], ['done'], ['configuration'], etc.
  • Paths with 'sse' substring: ['classes'], ['messages'], ['assets'], etc.

Apply this diff to check for exact path segment matches:

-      exclude: ({ path }) => path.includes('sse') || path.includes('on'),
+      exclude: ({ path }) => path.some(segment => segment === 'sse' || segment === 'on'),

Alternatively, if you need to match path prefixes or multiple segments, please clarify the intended matching logic.

Likely an incorrect or invalid review comment.



@gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @unnoq, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the oRPC publisher's capabilities by integrating with Cloudflare Durable Objects. This new adapter provides a distributed and stateful solution for real-time event handling, ensuring reliable event delivery and offering event replay functionality for resilient client connections. The changes involve a new package, core Durable Object implementation, and client-side integration, alongside updated documentation and comprehensive testing.

Highlights

  • New Cloudflare Durable Object Adapter: Introduces a new @orpc/experimental-publisher-durable-object package, providing a publisher adapter that leverages Cloudflare Durable Objects for robust and distributed event publishing and subscription.
  • Event Replay and Resume Functionality: The Durable Object adapter includes built-in resume functionality, allowing clients to reconnect and receive events that occurred while they were disconnected, based on a configurable retention period.
  • Custom JSON Serializers: Adds support for customJsonSerializers to the IORedisPublisher and UpstashRedisPublisher configurations, enhancing flexibility for handling complex data types (see the sketch after this list).
  • Comprehensive Testing: Includes extensive unit tests for the PublisherDurableObject, DurablePublisher, and ResumeStorage components, along with end-to-end tests demonstrating the full publish/subscribe and resume flow within a Cloudflare Worker environment.
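For the Custom JSON Serializers highlight above, here is a minimal sketch of what a serializer entry might look like, assuming the option accepts oRPC's standard custom JSON serializer shape (a numeric type tag plus condition/serialize/deserialize functions); the Person class and the type tag 100 are purely illustrative:

// Hypothetical domain type that plain JSON cannot represent directly.
class Person {
  constructor(public readonly name: string) {}
}

// One serializer entry: decides when it applies, how to flatten, and how to revive.
const customJsonSerializers = [
  {
    type: 100, // custom tag; assumed not to collide with built-in tags
    condition: (data: unknown): data is Person => data instanceof Person,
    serialize: (person: Person) => person.name, // to a JSON-safe value
    deserialize: (name: string) => new Person(name), // back to the rich type
  },
]

// Sketch only: this array would be passed via the publisher options, e.g.
// new IORedisPublisher(..., { customJsonSerializers }).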

@cloudflare-workers-and-pages bot commented Nov 26, 2025

Deploying orpc with Cloudflare Pages

Latest commit: 6bd6b32
Status: ✅  Deploy successful!
Preview URL: https://17fe8003.orpc-1qh.pages.dev
Branch Preview URL: https://feat-publisher-durable-objec.orpc-1qh.pages.dev

View logs

@gemini-code-assist bot left a comment (Contributor)


Code Review

This pull request introduces a new, well-tested publisher adapter for Cloudflare Durable Objects. The implementation is robust and includes comprehensive unit, integration, and end-to-end tests. I've provided a few suggestions to further enhance robustness, type safety, observability, and the clarity of the documentation.

@pkg-pr-new bot commented Nov 26, 2025

More templates

@orpc/ai-sdk

npm i https://pkg.pr.new/@orpc/ai-sdk@1253

@orpc/arktype

npm i https://pkg.pr.new/@orpc/arktype@1253

@orpc/client

npm i https://pkg.pr.new/@orpc/client@1253

@orpc/contract

npm i https://pkg.pr.new/@orpc/contract@1253

@orpc/experimental-durable-iterator

npm i https://pkg.pr.new/@orpc/experimental-durable-iterator@1253

@orpc/hey-api

npm i https://pkg.pr.new/@orpc/hey-api@1253

@orpc/interop

npm i https://pkg.pr.new/@orpc/interop@1253

@orpc/json-schema

npm i https://pkg.pr.new/@orpc/json-schema@1253

@orpc/nest

npm i https://pkg.pr.new/@orpc/nest@1253

@orpc/openapi

npm i https://pkg.pr.new/@orpc/openapi@1253

@orpc/openapi-client

npm i https://pkg.pr.new/@orpc/openapi-client@1253

@orpc/otel

npm i https://pkg.pr.new/@orpc/otel@1253

@orpc/experimental-pino

npm i https://pkg.pr.new/@orpc/experimental-pino@1253

@orpc/experimental-publisher

npm i https://pkg.pr.new/@orpc/experimental-publisher@1253

@orpc/experimental-publisher-durable-object

npm i https://pkg.pr.new/@orpc/experimental-publisher-durable-object@1253

@orpc/experimental-ratelimit

npm i https://pkg.pr.new/@orpc/experimental-ratelimit@1253

@orpc/react

npm i https://pkg.pr.new/@orpc/react@1253

@orpc/react-query

npm i https://pkg.pr.new/@orpc/react-query@1253

@orpc/experimental-react-swr

npm i https://pkg.pr.new/@orpc/experimental-react-swr@1253

@orpc/server

npm i https://pkg.pr.new/@orpc/server@1253

@orpc/shared

npm i https://pkg.pr.new/@orpc/shared@1253

@orpc/solid-query

npm i https://pkg.pr.new/@orpc/solid-query@1253

@orpc/standard-server

npm i https://pkg.pr.new/@orpc/standard-server@1253

@orpc/standard-server-aws-lambda

npm i https://pkg.pr.new/@orpc/standard-server-aws-lambda@1253

@orpc/standard-server-fastify

npm i https://pkg.pr.new/@orpc/standard-server-fastify@1253

@orpc/standard-server-fetch

npm i https://pkg.pr.new/@orpc/standard-server-fetch@1253

@orpc/standard-server-node

npm i https://pkg.pr.new/@orpc/standard-server-node@1253

@orpc/standard-server-peer

npm i https://pkg.pr.new/@orpc/standard-server-peer@1253

@orpc/svelte-query

npm i https://pkg.pr.new/@orpc/svelte-query@1253

@orpc/tanstack-query

npm i https://pkg.pr.new/@orpc/tanstack-query@1253

@orpc/trpc

npm i https://pkg.pr.new/@orpc/trpc@1253

@orpc/valibot

npm i https://pkg.pr.new/@orpc/valibot@1253

@orpc/vue-colada

npm i https://pkg.pr.new/@orpc/vue-colada@1253

@orpc/vue-query

npm i https://pkg.pr.new/@orpc/vue-query@1253

@orpc/zod

npm i https://pkg.pr.new/@orpc/zod@1253

commit: 6bd6b32

Copilot AI left a comment (Contributor)


Pull request overview

This PR introduces a Cloudflare Durable Object adapter for the publisher system, enabling real-time pub/sub functionality with event replay capabilities in Cloudflare Workers environments.

  • Adds new @orpc/experimental-publisher-durable-object package with client and server implementations
  • Implements resume/replay functionality using SQLite storage with configurable retention
  • Includes comprehensive test coverage with unit tests, integration tests, and end-to-end tests using Cloudflare's vitest pool

Reviewed changes

Copilot reviewed 23 out of 25 changed files in this pull request and generated 4 comments.

Summary per file:
  • vitest.config.ts: Configures test exclusions and project setup for Cloudflare Workers tests
  • tsconfig.json: Excludes the new package from main TypeScript compilation
  • packages/publisher-durable-object/tsconfig.json: TypeScript configuration for the package with Cloudflare Workers types
  • packages/publisher-durable-object/tsconfig.test.json: TypeScript configuration for tests with vitest globals
  • packages/publisher-durable-object/tests/wrangler.jsonc: Wrangler configuration for Durable Object test setup
  • packages/publisher-durable-object/tests/vitest.config.ts: Vitest configuration using Cloudflare Workers pool
  • packages/publisher-durable-object/tests/shared.ts: Test utilities for mocking Durable Object state
  • packages/publisher-durable-object/tests/e2e.ts: E2E test worker implementing publisher with Durable Objects
  • packages/publisher-durable-object/tests/e2e.test.ts: E2E tests for publish/subscribe functionality and resume support
  • packages/publisher-durable-object/src/types.ts: Type definitions for serialized messages
  • packages/publisher-durable-object/src/resume-storage.ts: SQLite-based storage for event replay with automatic cleanup
  • packages/publisher-durable-object/src/resume-storage.test.ts: Comprehensive unit tests for resume storage functionality
  • packages/publisher-durable-object/src/index.ts: Package exports
  • packages/publisher-durable-object/src/index.test.ts: Basic export verification test
  • packages/publisher-durable-object/src/durable-publisher.ts: Client-side publisher implementation for Durable Objects
  • packages/publisher-durable-object/src/durable-publisher.test.ts: Unit tests for client-side publisher
  • packages/publisher-durable-object/src/durable-object.ts: Server-side Durable Object implementation handling pub/sub
  • packages/publisher-durable-object/src/durable-object.test.ts: Unit tests for Durable Object implementation
  • packages/publisher-durable-object/package.json: Package metadata and dependencies
  • packages/publisher-durable-object/build.config.ts: Build configuration externalizing Cloudflare Workers imports
  • packages/publisher-durable-object/README.md: Package documentation
  • packages/publisher-durable-object/.gitignore: Git ignore patterns
  • package.json: Adds Cloudflare vitest pool workers dependency
  • apps/content/docs/helpers/publisher.md: Adds documentation for Durable Object publisher usage


@coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (14)
apps/content/docs/helpers/publisher.md (1)

151-156: Update the adapters table to include the new Cloudflare Durable Object adapter.

The available adapters table lists MemoryPublisher, IORedisPublisher, and UpstashRedisPublisher, but the new PublisherDurableObject adapter (with DurablePublisher client) should be included in this table for discoverability and consistency.

Add a row to the adapters table:

 | Name                    | Resume Support | Description                                                      |
 | ----------------------- | -------------- | ---------------------------------------------------------------- |
 | `MemoryPublisher`       | ✅             | A simple in-memory publisher                                     |
 | `IORedisPublisher`      | ✅             | Adapter for [ioredis](https://github.com/redis/ioredis)          |
 | `UpstashRedisPublisher` | ✅             | Adapter for [Upstash Redis](https://github.com/upstash/redis-js) |
+| `PublisherDurableObject`| ✅             | Adapter for [Cloudflare Durable Objects](https://developers.cloudflare.com/durable-objects/) |
packages/publisher-durable-object/README.md (1)

67-69: Expand package description.

The @orpc/experimental-publisher-durable-object section is minimal. Consider adding a brief description of key features (e.g., message resume/replay support, type safety) to match the detail level of other sections.

packages/publisher-durable-object/tests/shared.ts (1)

11-34: Consider more robust SQL parsing and verify rowsWritten calculation.

Two potential issues:

  1. Line 16: Using includes() for SQL command detection is fragile and could match keywords within string literals or comments.

  2. Line 32: The rowsWritten calculation adds table count delta to changes. This might double-count CREATE TABLE operations, as changes already reflects the operation. Verify this matches Cloudflare's actual Durable Object SQL behavior.

Consider refactoring to:

-          const method = query.includes('SELECT') || query.includes('RETURNING') ? 'all' : 'run'
+          const trimmed = query.trim().toUpperCase()
+          const method = trimmed.startsWith('SELECT') || query.includes('RETURNING') ? 'all' : 'run'

And verify the rowsWritten calculation against Cloudflare's documentation:

What does rowsWritten represent in Cloudflare Durable Objects SQL API?
packages/publisher-durable-object/tsconfig.test.json (1)

11-15: Verify the include pattern for tests directory.

The pattern "tests" may not recursively include all files under the tests directory in all TypeScript versions. Consider using "tests/**/*" for explicit recursive inclusion.

   "include": [
-    "tests",
+    "tests/**/*",
     "src/**/*.test.*",
     "src/**/*.test-d.ts"
   ]
packages/publisher-durable-object/tests/e2e.test.ts (1)

28-34: Consider distinguishing abort errors from unexpected failures.

The empty catch {} blocks will swallow all errors, including unexpected ones that might indicate bugs. Consider checking for abort specifically:

       try {
         for await (const event of subscription) {
           messages.push(event.message)
         }
       }
-      catch {}
+      catch (error) {
+        if (error instanceof Error && error.name !== 'AbortError') {
+          throw error
+        }
+      }
packages/publisher-durable-object/src/resume-storage.test.ts (1)

85-98: Consider reducing sleep duration or marking as slow test.

The sleep(2100) call makes this test take over 2 seconds. While necessary for testing retention expiration, consider:

  1. Adding a comment explaining the 2100ms value (retention 1s + buffer)
  2. Marking with it.skip or a slow test tag for CI optimization if needed

The test logic itself is correct for validating cleanup behavior.

packages/publisher-durable-object/src/durable-publisher.ts (1)

125-130: Consider specifying a close code for clean disposal.

When the disposer is called, consider using websocket.close(1000, 'Normal closure') to explicitly indicate a normal client-initiated close. This provides better debuggability and aligns with WebSocket conventions.

     return async () => {
-      websocket.close()
+      websocket.close(1000, 'Normal closure')
     }
packages/publisher-durable-object/tests/e2e.ts (1)

44-56: Consider adding type annotation instead of any for DurablePublisher.

Using DurablePublisher<any> at line 46 loses type safety. Consider defining a proper type for the publisher's event schema:

+type PublisherEvents = Record<string, { message: string }>
+
 export default {
   async fetch(request, env) {
-    const publisher = new DurablePublisher<any>(env.PUBLISHER_DO)
+    const publisher = new DurablePublisher<PublisherEvents>(env.PUBLISHER_DO)

This aligns with the type already defined in the base context at line 13.

packages/publisher-durable-object/src/resume-storage.ts (3)

92-102: Resilient error recovery with schema reset.

The catch-all retry pattern accepts potential data loss to prevent total failure. The comment acknowledges this trade-off. Consider logging the error for observability:

     catch (error) {
       /**
        * On error (disk full, ID overflow, etc.), reset schema and retry.
        * May cause data loss, but prevents total failure.
        */
+      console.warn('ResumeStorage: Error during store, resetting data:', error)
       await this.resetData()
       return insertEvent()
     }

143-145: Minor typo in comment.

Line 144: "trigger form alarm" should be "trigger from alarm".

   async alarm(): Promise<void> {
-    this.isInitedAlarm = true // trigger form alarm means it's already initialized
+    this.isInitedAlarm = true // trigger from alarm means it's already initialized
     await this.ensureReady()

199-205: Index on INTEGER PRIMARY KEY is redundant.

SQLite automatically creates an implicit index for INTEGER PRIMARY KEY AUTOINCREMENT columns. The index at line 200 (idx_events_id) is redundant and adds minor overhead. Consider removing it:

-    this.ctx.storage.sql.exec(`
-      CREATE INDEX IF NOT EXISTS "${this.schemaPrefix}idx_events_id" ON "${this.schemaPrefix}events" (id)
-    `)
-
     this.ctx.storage.sql.exec(`
       CREATE INDEX IF NOT EXISTS "${this.schemaPrefix}idx_events_stored_at" ON "${this.schemaPrefix}events" (stored_at)
     `)

The stored_at index is still valuable for the cleanup query's WHERE stored_at < unixepoch() - ? clause.

packages/publisher-durable-object/src/durable-object.ts (3)

18-24: Make publish routing more explicit (path + method).

request.url.includes('/publish') will match any URL containing that substring (e.g. /foo/publishers) and ignores HTTP method. If this DO is intended to handle only POST /publish, consider something along these lines:

const url = new URL(request.url)

if (request.method === 'POST' && url.pathname === '/publish') {
  return this.handlePublish(request)
}

return this.handleSubscribe(request)

This avoids accidental matches and makes future route changes safer.


26-41: Consider validating publish payload and mapping failures to clearer HTTP errors.

this.resumeStorage.store(await request.text()) will throw (via JSON.parse inside ResumeStorage.store) if the payload is not valid JSON for your SerializedMessage shape, which will surface as a generic 5xx. If this endpoint is exposed beyond trusted callers, it may be worth:

  • Catching JSON/shape-validation errors and returning 400 Bad Request with a short error message.
  • Letting unexpected storage/runtime errors still bubble as 5xx.

This gives clearer feedback to publishers without changing the happy path.
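Under those assumptions, a minimal sketch of the suggested split between caller errors (400) and server errors (5xx); the helper name and the 204 success status are illustrative only:

// Hypothetical helper: validate the JSON payload up front, let storage errors bubble as 5xx.
async function handlePublishSafely(
  request: Request,
  store: (raw: string) => Promise<unknown>,
): Promise<Response> {
  const body = await request.text()

  try {
    JSON.parse(body) // shape/JSON validation happens before touching storage
  }
  catch {
    return new Response('Invalid JSON payload', { status: 400 })
  }

  // Unexpected storage/runtime errors are intentionally not caught here.
  await store(body)
  return new Response(null, { status: 204 })
}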


43-61: Tighten last-event-id handling for empty or malformed values.

The condition if (lastEventId !== null) means even an empty header value (Last-Event-ID: ) will trigger getEventsAfter(''), effectively replaying the full retained backlog. If that’s not what you want, you could gate on a non-empty value instead, e.g.:

const rawLastEventId = request.headers.get('last-event-id')
const lastEventId = rawLastEventId?.trim()

if (lastEventId) {
  const events = await this.resumeStorage.getEventsAfter(lastEventId)
  // ...
}

This makes the replay behavior more predictable when intermediaries or clients send blank headers.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f9e5dec and d5da450.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (24)
  • apps/content/docs/helpers/publisher.md (2 hunks)
  • package.json (1 hunks)
  • packages/publisher-durable-object/.gitignore (1 hunks)
  • packages/publisher-durable-object/README.md (1 hunks)
  • packages/publisher-durable-object/build.config.ts (1 hunks)
  • packages/publisher-durable-object/package.json (1 hunks)
  • packages/publisher-durable-object/src/durable-object.test.ts (1 hunks)
  • packages/publisher-durable-object/src/durable-object.ts (1 hunks)
  • packages/publisher-durable-object/src/durable-publisher.test.ts (1 hunks)
  • packages/publisher-durable-object/src/durable-publisher.ts (1 hunks)
  • packages/publisher-durable-object/src/index.test.ts (1 hunks)
  • packages/publisher-durable-object/src/index.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.test.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
  • packages/publisher-durable-object/src/types.ts (1 hunks)
  • packages/publisher-durable-object/tests/e2e.test.ts (1 hunks)
  • packages/publisher-durable-object/tests/e2e.ts (1 hunks)
  • packages/publisher-durable-object/tests/shared.ts (1 hunks)
  • packages/publisher-durable-object/tests/vitest.config.ts (1 hunks)
  • packages/publisher-durable-object/tests/wrangler.jsonc (1 hunks)
  • packages/publisher-durable-object/tsconfig.json (1 hunks)
  • packages/publisher-durable-object/tsconfig.test.json (1 hunks)
  • tsconfig.json (1 hunks)
  • vitest.config.ts (3 hunks)
🧰 Additional context used
🧬 Code graph analysis (5)
packages/publisher-durable-object/src/resume-storage.test.ts (2)
packages/publisher-durable-object/tests/shared.ts (1)
  • createDurableObjectState (4-54)
packages/shared/src/index.ts (1)
  • sleep (22-22)
packages/publisher-durable-object/tests/e2e.test.ts (2)
packages/server/src/router-client.ts (1)
  • RouterClient (15-20)
packages/client/src/client.ts (1)
  • createORPCClient (17-43)
packages/publisher-durable-object/src/durable-publisher.ts (5)
packages/publisher/src/publisher.ts (2)
  • PublisherOptions (4-19)
  • PublisherSubscribeListenerOptions (21-31)
packages/client/src/adapters/standard/rpc-json-serializer.ts (2)
  • StandardRPCJsonSerializerOptions (25-27)
  • StandardRPCJsonSerializer (29-214)
packages/publisher-durable-object/src/types.ts (1)
  • SerializedMessage (3-6)
packages/shared/src/json.ts (1)
  • stringifyJSON (9-12)
packages/shared/src/object.ts (1)
  • isTypescriptObject (57-59)
packages/publisher-durable-object/src/durable-object.ts (1)
packages/publisher-durable-object/src/resume-storage.ts (2)
  • ResumeStorageOptions (4-37)
  • ResumeStorage (39-223)
packages/publisher-durable-object/src/resume-storage.ts (2)
packages/publisher-durable-object/src/types.ts (1)
  • SerializedMessage (3-6)
packages/shared/src/json.ts (1)
  • stringifyJSON (9-12)
🪛 LanguageTool
packages/publisher-durable-object/README.md

[style] ~54-~65: Twelve findings, one per line 54-65 of the README's @orpc/* package list: "Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym." (ENGLISH_WORD_REPEAT_BEGINNING_RULE)

🪛 markdownlint-cli2 (0.18.1)
packages/publisher-durable-object/README.md

48-48: Link text should be descriptive

(MD059, descriptive-link-text)


75-75: Images should have alternate text (alt text)

(MD045, no-alt-text)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: Agent
  • GitHub Check: publish-commit
  • GitHub Check: lint
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (47)
packages/publisher-durable-object/.gitignore (2)

1-4: Broad hidden-file pattern — verify against project conventions.

The .* rule on line 2 ignores all hidden files and directories. While the exceptions for .gitignore and .*.example are in place, other dotfiles (e.g., .npmrc, .eslintignore) typically shared at the monorepo root won't be ignored here. Verify this aligns with your project's structure and conventions.


6-22: Standard patterns for generated artifacts — LGTM.

The coverage of generated folders, build outputs, and Vite/Vitest-specific artifacts is solid and follows best practices.

apps/content/docs/helpers/publisher.md (1)

194-194: Documentation additions for customJsonSerializers look good.

The optional customJsonSerializers parameter is clearly documented in both the IORedisPublisher and UpstashRedisPublisher examples, providing users with visibility into serialization customization options.

Also applies to: 219-219

tsconfig.json (1)

42-42: LGTM!

The exclusion of the publisher-durable-object package is appropriate and consistent with other experimental packages in the monorepo.

packages/publisher-durable-object/tsconfig.json (1)

1-20: LGTM!

The TypeScript configuration is well-structured with appropriate Cloudflare Workers types and correct package references.

packages/publisher-durable-object/build.config.ts (1)

1-7: LGTM!

Externalizing cloudflare:workers is the correct approach as it's provided by the Cloudflare Workers runtime and should not be bundled.

packages/publisher-durable-object/src/types.ts (1)

1-6: LGTM!

The SerializedMessage interface is well-defined with appropriate types for serialized data and optional event metadata.

packages/publisher-durable-object/tests/shared.ts (1)

56-62: LGTM!

The WebSocket mock is simple and appropriate for testing purposes.

packages/publisher-durable-object/src/durable-publisher.test.ts (4)

5-80: LGTM!

The mock namespace implementation is comprehensive and well-structured, providing realistic simulation of Durable Object stubs with WebSocket support.


102-165: LGTM!

The publish tests provide comprehensive coverage of serialization, configuration options, and error handling scenarios.


167-256: LGTM!

The subscribe tests comprehensively cover various message formats, configuration options, and error conditions.


258-393: LGTM!

The error handling and edge case tests are thorough, covering various error scenarios and WebSocket close codes appropriately.

packages/publisher-durable-object/package.json (3)

1-30: LGTM!

The package metadata and export configuration are well-structured and follow monorepo conventions. Version 0.0.0 is appropriate for a new experimental package.


31-36: LGTM!

The build and type-checking scripts are standard and consistent with other packages in the monorepo.


37-47: Version ^4.20251118.0 of @cloudflare/workers-types is valid and current.

The version exists on npm and was released November 18, 2025 (within the current month). The latest available version is 4.20251126.0. The caret operator allows npm to automatically update to any compatible 4.x version, ensuring you receive patches and minor updates. No compatibility issues identified.

package.json (1)

30-30: Version ^0.10.10 is current and secure.

Verification confirms:

  • Latest available version is 0.10.11 (one patch release ahead)
  • The caret constraint (^) will automatically allow the upgrade to 0.10.11
  • No known vulnerabilities or security advisories for either version
  • Package is not deprecated

No action needed.

vitest.config.ts (1)

6-6: LGTM!

Good approach to isolate the Cloudflare Durable Object tests into a separate project configuration. Spreading defaultExclude ensures standard exclusions (like node_modules) are preserved while adding the package-specific exclusion.

Also applies to: 17-17, 69-69

packages/publisher-durable-object/src/index.test.ts (1)

1-9: LGTM!

The mock for cloudflare:workers is necessary since that module is only available in the Cloudflare runtime. Using dynamic import ensures the mock is established before the module loads.

Consider expanding this test to verify all expected exports if the public API surface is critical to maintain.

packages/publisher-durable-object/tests/wrangler.jsonc (1)

1-19: LGTM!

The Wrangler configuration is correctly structured for Durable Object testing with SQLite storage. The binding and migration setup looks appropriate.

Consider updating compatibility_date to a more recent date (e.g., 2024-11-01 or later) to ensure access to the latest Cloudflare Workers features and fixes, unless there's a specific reason to pin to 2024-01-01.

packages/publisher-durable-object/tests/vitest.config.ts (1)

1-12: LGTM!

The Cloudflare vitest pool workers configuration is correctly set up with the wrangler config path.

packages/publisher-durable-object/src/index.ts (1)

1-4: LGTM!

Clean barrel file providing a unified public API surface for the package.

packages/publisher-durable-object/tests/e2e.test.ts (1)

89-131: LGTM!

The resume subscription test properly validates that a subscriber with lastEventId: '1' receives messages starting from that event ID onwards. The test structure is well-organized with proper cleanup via AbortController.

packages/publisher-durable-object/src/resume-storage.test.ts (6)

1-4: LGTM! Well-structured test imports and setup.

The imports are appropriate for the test file, utilizing shared test utilities and vitest.


6-14: Good coverage of invalid retentionSeconds values.

The parameterized test covers important edge cases: zero, NaN, positive infinity, and negative values. This ensures the isEnabled getter correctly identifies disabled states.


111-132: Excellent resilience testing for ID overflow scenario.

This test properly validates the schema reset and retry logic when inserting near SQLite's INTEGER max value. The test correctly verifies that after reset, only the new event remains with ID '1'.


163-181: Good test for getEventsAfter pagination behavior.

The test properly validates that events are returned in order and the lastEventId filtering works correctly.


269-287: Thorough test for inactive cleanup with concurrency blocking.

The test correctly verifies that:

  1. blockConcurrencyWhile is called to prevent race conditions
  2. deleteAll is invoked when truly inactive
  3. No alarm is rescheduled after deletion

350-361: Good verification of database indexes.

Testing that both idx_events_id and idx_events_stored_at indexes are created ensures query performance for resume operations.

packages/publisher-durable-object/src/durable-object.test.ts (6)

4-8: LGTM! Proper mock for cloudflare:workers module.

The mock provides a minimal DurableObject base class that the PublisherDurableObject can extend during tests.


24-37: Clever workaround for Response status 101 limitation.

Since Node's Response class doesn't allow status 101, the MockResponse class uses 200 internally while exposing the intended status via property override. This correctly simulates Cloudflare's WebSocket upgrade response.


71-104: Good error isolation test for WebSocket broadcast.

The test correctly verifies that a failing WebSocket doesn't prevent messages from being sent to healthy WebSockets, and that errors are properly logged.


150-171: Correct test for replay opt-in behavior.

Verifying that events are not replayed without last-event-id header ensures clients must explicitly opt-in to replay functionality.


209-249: Thorough error handling test for replay failures.

The test validates that:

  1. Replay continues even when individual sends fail
  2. Errors are logged appropriately
  3. The WebSocket connection still completes successfully

252-264: LGTM! Alarm delegation test.

The test confirms that the alarm method properly forwards to resume storage, triggering cleanup when appropriate.

packages/publisher-durable-object/src/durable-publisher.ts (4)

9-23: Well-documented interface with sensible defaults.

The DurablePublisherOptions interface clearly documents the prefix and getStubByName options with appropriate defaults.


40-59: LGTM! Robust publish implementation.

The publish method correctly:

  1. Resolves the stub using the configured strategy
  2. Serializes the payload with metadata
  3. Handles HTTP errors with informative error messages

80-105: Comprehensive message data handling.

The message handler correctly handles all three possible data types from WebSocket messages:

  • Blob (async text conversion)
  • String (direct use)
  • ArrayBuffer (TextDecoder)

Deserialization errors are properly caught and forwarded to the onError callback.


107-115: Good handling of unexpected WebSocket closures.

Correctly filters out normal close codes (1000, 1001) and reports unexpected closures via the error callback.

packages/publisher-durable-object/tests/e2e.ts (2)

1-6: LGTM! Clean imports for E2E test harness.

The imports correctly bring in the necessary types and implementations for setting up the test environment.


29-37: Short retention appropriate for E2E tests.

The retentionSeconds: 1 is intentionally short for testing purposes, allowing rapid verification of retention behavior without long waits.

packages/publisher-durable-object/src/resume-storage.ts (5)

4-37: Excellent documentation for ResumeStorageOptions.

The JSDoc comments clearly explain each option's purpose, default values, and important remarks about behavior.


48-50: LGTM! Clean enabled state check.

The isEnabled getter correctly validates that retentionSeconds is a finite positive number.


73-90: Good handling of SQLite INTEGER overflow via TEXT casting.

Casting the auto-increment ID to TEXT ensures safe handling in JavaScript for very large IDs. The comment explains the rationale well.
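A minimal sketch of the general technique (not the package's exact query), assuming a Durable Object SQL cursor with a one() accessor and a hypothetical events(payload) table:

// Returning the auto-increment id as TEXT avoids precision loss once it exceeds
// Number.MAX_SAFE_INTEGER (2^53 - 1) when read back in JavaScript.
const row = ctx.storage.sql.exec(
  'INSERT INTO events (payload) VALUES (?) RETURNING CAST(id AS TEXT) AS id',
  payload,
).one() as { id: string }

// Callers treat row.id as an opaque string from here on.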


164-170: Good use of blockConcurrencyWhile for safe deletion.

Using blockConcurrencyWhile prevents race conditions during cleanup. Resetting flags before deleteAll ensures proper re-initialization on subsequent operations.


220-222: LGTM! Clear alarm scheduling logic.

The alarm is scheduled at retentionSeconds + inactiveDataRetentionTime as documented, providing a reasonable cleanup window.
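A minimal sketch of that scheduling rule, assuming both option values are configured in seconds and using the Durable Object storage alarm API:

// Schedule cleanup one retention-plus-inactivity window from now (units assumed to be seconds).
const delayMs = (retentionSeconds + inactiveDataRetentionTime) * 1000
await ctx.storage.setAlarm(Date.now() + delayMs)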

packages/publisher-durable-object/src/durable-object.ts (2)

5-16: Nice, minimal options surface and clean composition with ResumeStorage.

The constructor + PublisherDurableObjectOptions wiring is straightforward and keeps resume concerns nicely encapsulated in ResumeStorage. No issues here from a design or correctness standpoint.


64-66: Alarm delegation looks correct and keeps lifecycle logic centralized.

Forwarding alarm() directly to ResumeStorage.alarm() is a good separation of concerns and matches how ResumeStorage is structured around alarms and cleanup.

@codecov bot commented Nov 26, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.


@coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (7)
packages/publisher-durable-object/src/durable-object.test.ts (5)

5-23: Global WebSocketPair mock is fine but tightly bound to global state

The mock pattern with mockServerWebSocket / mockClientWebSocket matches your use in tests and keeps control over both ends. Just be aware this relies on global mutable variables; if you later add parallel tests or reuse this helper elsewhere, consider extracting a small factory to avoid accidental cross-test interference.


25-38: MockResponse wrapper is clever but may hide real Response behavior differences

Overriding Response to bypass the “status 101 not allowed” constraint while preserving a logical status and webSocket works for your use case and keeps assertions simple. One subtlety: any code that inspects response.ok or other derived properties will now reflect the internal 200, not the logical 101; your current tests don’t rely on that, but it’s worth remembering if you extend them.


40-43: Consider restoring globals in afterAll for isolation

You reset mockServerWebSocket and mockClientWebSocket in beforeEach, which is good. Since you also patch globalThis.Response and globalThis.WebSocketPair once at module load, consider adding an afterAll to restore Response to OriginalResponse (and optionally delete WebSocketPair) to keep this suite from affecting any other tests that might run in the same process.


174-208: Replay test relies on implicit ID ordering; consider clarifying assumption

The replay test assumes that publishing three events yields IDs 1, 2, 3 so that last-event-id: '1' replays the latter two. That’s consistent with how ResumeStorage is likely implemented, but it’s an internal contract. If ResumeStorage ever changes to use non-numeric IDs, this test may become brittle. A small comment noting the dependency on numeric, incrementing IDs (or deriving the ID from the first publish) would make this more self-documenting.


253-264: Alarm test is a bit coupled to storage internals but acceptable

The test assumes that calling alarm() with no websockets and no events leads to state.storage.deleteAll being invoked. That matches current ResumeStorage behavior but is an internal policy. If you expect alarm semantics to evolve, you might instead assert that resumeStorage.alarm() is invoked once (via a spy) rather than specific storage operations. As written, it’s fine but a bit brittle to internals.

packages/publisher-durable-object/src/durable-publisher.test.ts (2)

5-80: Mock namespace and stub behavior look consistent with DurablePublisher expectations

The createMockNamespace and createMockStub helpers model:

  • getByName lazy stub creation,
  • stub.fetch handling both URL string + init and Request objects,
  • /publish path storing the raw body and broadcasting to websockets,
  • subscribe path returning { ok, webSocket, headers }.

This aligns with the real DurablePublisher contract and keeps tests focused on publisher behavior rather than network details. Good abstraction.


240-256: lastEventId header propagation test is accurate but could assert method/URL

You already verify that last-event-id is set to 42 by inspecting init.headers. That’s the key behavior. Optionally, you could also assert that the request path is /subscribe and method is GET/upgrade as expected, but that’s not strictly necessary here.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between d5da450 and aa73a69.

📒 Files selected for processing (4)
  • packages/publisher-durable-object/src/durable-object.test.ts (1 hunks)
  • packages/publisher-durable-object/src/durable-publisher.test.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
  • packages/publisher-durable-object/tsconfig.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/publisher-durable-object/tsconfig.json
  • packages/publisher-durable-object/src/resume-storage.ts
🧰 Additional context used
🧬 Code graph analysis (2)
packages/publisher-durable-object/src/durable-publisher.test.ts (1)
packages/publisher-durable-object/src/durable-publisher.ts (1)
  • DurablePublisher (25-131)
packages/publisher-durable-object/src/durable-object.test.ts (2)
packages/publisher-durable-object/tests/shared.ts (2)
  • createWebSocket (56-62)
  • createDurableObjectState (4-54)
packages/publisher-durable-object/src/durable-object.ts (1)
  • PublisherDurableObject (10-67)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: publish-commit
  • GitHub Check: lint
  • GitHub Check: test
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (20)
packages/publisher-durable-object/src/durable-object.test.ts (6)

47-70: Broadcast test covers the main path well

The test that subscribes twice and then publishes verifies that both mocked sockets receive the stored message, aligning with ctx.getWebSockets() semantics. Looks solid and mirrors the implementation correctly.


72-105: Good coverage of partial send failures during publish

You explicitly verify that:

  • a throwing send does not break the request (status 204),
  • console.error is logged with the expected prefix,
  • other websockets still receive the message.

That matches the implementation behavior and gives good confidence around error resilience.


107-130: Resume-storage test asserts only presence of meta.id

The test only checks that an ID is present in sentMessage.meta.id, which keeps it decoupled from internal ID format and is unlikely to be flaky. That’s a good balance between verifying resume behavior and not over-constraining implementation details.


133-149: WebSocket upgrade test nicely validates server/client wiring

By injecting both server and client websockets, then asserting status 101, response.webSocket reference, and acceptWebSocket(serverWs) call, you fully exercise the handleSubscribe handshake. This is in line with the durable-object implementation.


151-172: No-replay path behavior is well defined

The test ensuring no events are replayed when last-event-id is absent directly matches the guard if (lastEventId !== null) in handleSubscribe. Assertions are minimal and robust.


210-250: Replay error handling test correctly exercises logging and continuation

You simulate a failure on the first send, assert 101 status, log call with the expected prefix, and that both events were attempted. This closely mirrors the durable-object logic and gives good coverage of replay robustness.

packages/publisher-durable-object/src/durable-publisher.test.ts (14)

82-100: Custom serializer wiring for Person is clear and type-safe

You define a dedicated serializer entry (type 100) with a proper type guard and symmetric serialize/deserialize functions. This is a good, minimal example for exercising customJsonSerializers end‑to‑end.


102-128: Publish test validates full serialized payload and meta

The test asserts:

  • getByName is called with the event name,
  • one message is published,
  • the JSON matches expected structure for both data.json and data.meta,
  • top-level meta reflects event meta from withEventMeta.

This gives strong regression coverage of the serialization pipeline.


130-139: Prefix behavior for publish is correctly asserted

You only assert the getByName call with my-prefix:message, which is exactly the surface contract DurablePublisher provides. That’s sufficient and not over-specified.


141-152: Custom getStubByName test is precise and minimal

Verifying both that:

  • the hook is called with (namespace, 'message'), and
  • the namespace getByName is invoked with custom:message

confirms the intended override pathway without coupling to other behavior.


154-164: Publish-failure test correctly matches error message contract

By forcing stub.fetch to return a 500 and asserting the thrown message Failed to publish event: 500 Internal Server Error, the test directly pins the error format from DurablePublisher.publish. Looks accurate and stable.


167-181: Subscribe lifecycle test covers basic wiring and unsubscribe

You assert that:

  • the correct stub is selected,
  • accept is called on the websocket,
  • the unsubscribe function eventually calls close.

That’s exactly what you want from the subscribe API contract.


183-212: Message format matrix test is thorough and well-structured

Using it.each to cover Text, Blob, and ArrayBuffer inputs:

  • exercises all supported formats,
  • validates deserialization to Person via your custom serializer,
  • checks that event meta is propagated via getEventMeta.

The small async delay with setTimeout(0) is fine for the internal async handling. Overall, strong coverage.


214-226: Prefix behavior for subscribe is symmetric with publish

The test ensures getByName is called with events:message when subscribing, giving confidence that prefixing is consistently applied across publish/subscribe. Nicely aligned.


228-238: Missing-websocket case is correctly treated as an error

Overriding stub.fetch to return { ok: true, webSocket: null } and asserting that subscribe rejects with the expected error message matches the implementation’s defensive checks and ensures callers can rely on a thrown error in this situation.


258-280: Message-processing error path is well covered

Simulating invalid JSON and checking:

  • onError is called once with an Error instance,
  • the main listener is not called,

matches the intended behavior for deserialization failures and gives good safety around malformed inputs.


282-303: WebSocket error event handling test aligns with error message contract

You trigger the first registered error listener with a simple event object and assert that:

  • onError is called,
  • the Error message is 'Subscription websocket error',
  • error.cause is the original event.

This matches the durable-publisher implementation and nicely pins the semantics.


305-324: Unexpected close (1006) error handling is precisely validated

The test simulates an abnormal closure and ensures onError is invoked with the expected message. That’s exactly the kind of edge-case you want covered for long-lived subscriptions.


326-361: Normal/going-away close cases ensure no spurious errors

Verifying that onError is not called for codes 1000 and 1001 ensures callers won’t see noise for normal shutdown or navigation away. Good negative tests.


365-392: Meta-absent edge case test is a nice safeguard

The final edge-case ensures that when serialized.meta is missing, the listener still receives the payload and getEventMeta returns undefined. This pins a subtle but important behavior around optional metadata and is a great addition.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

♻️ Duplicate comments (2)
packages/publisher-durable-object/src/durable-publisher.ts (1)

94-94: Type safety concern with as any casts.

This is consistent with the previous review comment. The as any casts bypass type checking and should be removed by improving the SerializedMessage type definition.
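
One possible direction, purely illustrative since the concrete fields must match what types.ts actually declares and what the durable object sends, is to spell out the two layers the tests already assert (data.json / data.meta plus optional top-level event meta):

interface SerializedMessage {
  // the StandardRPC-serialized payload, asserted in the publish test as data.json / data.meta
  data: { json: unknown, meta: unknown }
  // optional event meta attached via withEventMeta (field names assumed)
  meta?: { id?: string, retry?: number, comments?: string[] }
}

With something along these lines, the call sites currently using as any could narrow to the real field types instead.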

packages/publisher-durable-object/src/resume-storage.ts (1)

144-175: Minor typo in alarm comment

Small nit: the comment reads // triggered form alarm means it's already initialized. It should be from alarm.

Not functionally important, but fixing it will avoid confusion and aligns with an earlier review note.

🧹 Nitpick comments (4)
packages/publisher-durable-object/src/durable-publisher.ts (3)

54-58: Consider including event name in error message for better debugging.

The error message could be more descriptive by including the event name, making it easier to diagnose issues in production logs.

-      throw new Error(`Failed to publish event: ${response.status} ${response.statusText}`, {
+      throw new Error(`Failed to publish event "${event}": ${response.status} ${response.statusText}`, {

74-78: Consider including event name in error message for consistency.

Similar to the publish method, including the event name would help with debugging.

-      throw new Error('Failed to open subscription websocket to publisher durable object', {
+      throw new Error(`Failed to open subscription websocket for event "${event}"`, {

80-91: Consider extracting data type handling into a helper function.

The logic to handle Blob, string, and ArrayBuffer data types is clear but adds cognitive overhead in the message handler. Extracting it to a helper function would improve readability.

+  private async getWebSocketMessageData(event: MessageEvent): Promise<string> {
+    if (event.data instanceof Blob) {
+      return await event.data.text()
+    }
+    if (typeof event.data === 'string') {
+      return event.data
+    }
+    return new TextDecoder().decode(event.data)
+  }
+
   protected async subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>> {
     // ...
     websocket.addEventListener('message', async (event) => {
       try {
-        let data: string
-        if (event.data instanceof Blob) {
-          data = await event.data.text()
-        }
-        else if (typeof event.data === 'string') {
-          data = event.data
-        }
-        else {
-          data = new TextDecoder().decode(event.data)
-        }
+        const data = await this.getWebSocketMessageData(event)
packages/publisher-durable-object/src/resume-storage.test.ts (1)

111-142: Consider using afterEach hook instead of manual try/finally for cleaner spy restoration

In the "resets schema on insert error and retries" test, the spy is created and used throughout the entire test, then manually restored. While wrapping in try/finally would prevent leakage if an assertion throws, Vitest best practices recommend using an afterEach hook instead for cleaner, more idiomatic cleanup:

afterEach(() => {
  vi.restoreAllMocks()
})

This ensures all spies are automatically restored after every test without requiring manual management in each test. If you prefer per-test control or the spy is only needed for part of a test, then try/finally with mockRestore() in the finally block is the appropriate fallback.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aa73a69 and c9438fe.

📒 Files selected for processing (3)
  • packages/publisher-durable-object/src/durable-publisher.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.test.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
packages/publisher-durable-object/src/resume-storage.ts (2)
packages/publisher-durable-object/src/types.ts (1)
  • SerializedMessage (3-6)
packages/shared/src/json.ts (1)
  • stringifyJSON (9-12)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: test
  • GitHub Check: lint
  • GitHub Check: publish-commit
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (8)
packages/publisher-durable-object/src/durable-publisher.ts (2)

1-23: LGTM! Clean interface design.

The imports and interface definition follow good composition patterns. The optional prefix and getStubByName hook provide flexibility while maintaining sensible defaults.


25-38: LGTM! Constructor properly initializes all fields.

The constructor correctly extends the base Publisher class and initializes all fields with sensible defaults.

packages/publisher-durable-object/src/resume-storage.test.ts (2)

6-383: Thorough coverage of resume storage lifecycle

The test suite does a good job exercising disabled mode, lazy schema creation, ID/meta handling, retention cleanup, alarm behavior, and idempotence. This gives strong confidence in the durability and replay semantics; no functional issues from the tests themselves.


85-99: Consider using Vitest fake timers instead of multi‑second sleeps in retention tests

The tests at lines 85–99 and 279–319 rely on real sleep() calls (2100ms and 2000ms respectively). While this works, it slows the test suite and can introduce flakiness under load. Vitest's fake timer APIs (useFakeTimers(), advanceTimersByTime()) are a cleaner alternative that maintains test isolation without actual delays.

Alternatively, reduce retentionSeconds to a much smaller value (e.g., 100ms) with correspondingly shorter sleeps.

packages/publisher-durable-object/src/resume-storage.ts (4)

4-59: Options and enablement semantics look solid

The ResumeStorageOptions defaults and isEnabled check correctly treat non‑finite/zero/negative retentionSeconds as “disabled” while still allowing configuration of inactiveDataRetentionTime and schemaPrefix. This matches the tests’ expectations and keeps the disabled path cheap (no schema/alarm work).


177-219: Based on the Cloudflare Durable Objects documentation I retrieved, I can now evaluate the review comment's verification request.

The alarm scheduling and cleanup pattern aligns with Cloudflare DO platform guarantees

From the documentation:

  1. Single-alarm model matches the code's strategy: Cloudflare DO alarms allow one scheduled alarm per instance at a time. The pattern described (using scheduleAlarm() with retentionSeconds + inactiveDataRetentionTime) correctly assumes a single alarm is maintained.

  2. Deferred cleanup via lastCleanupTime gating is sound: Best practices recommend batching deletions and deferring expensive operations. The code's approach of checking lastCleanupTime + retentionSeconds * 1000 > now before executing DELETE statements matches Cloudflare's recommended pattern of keeping alarm handler work bounded and efficient.

  3. At-least-once alarm delivery requires idempotency: The code's lastCleanupTime gating naturally provides idempotency—repeated alarm deliveries (due to at-least-once semantics with retries) will re-check the same time window, making redundant DELETEs safe.

  4. Schema reset before deleteAll() is correct: Cloudflare documentation explicitly states that deleteAlarm() + deleteAll() must be called together to fully remove a Durable Object. Resetting isInitedSchema and isInitedAlarm before this operation ensures clean re-initialization on subsequent operations, which aligns with platform expectations.

However, manual code inspection is still recommended to verify:

  • The exact calculation in scheduleAlarm() for retentionSeconds + inactiveDataRetentionTime respects DO timeout/execution constraints
  • Whether blockConcurrencyWhile is used during schema initialization (mentioned in tests)
  • Error handling if setAlarm() or DELETE operations fail

64-104: Based on my verification:

  1. CAST(id AS TEXT) Pattern: ✓ Confirmed as correct approach per Cloudflare Durable Objects documentation. SQLite RETURNING clause supports casting expressions, and explicit TEXT casting is the reliable way to ensure JSON serialization returns a string value.

  2. JavaScript Integer Overflow Risk: ✓ Confirmed legitimate concern. SQLite INTEGER range (−9.2 × 10^18 to +9.2 × 10^18) far exceeds JavaScript's safe integer range (±9 × 10^15). Casting to TEXT and handling as strings is the recommended pattern.

  3. Symmetric Pattern: ✓ The code shows consistent handling - storing id as TEXT and re-injecting into meta on writes preserves the string representation and existing meta fields.

  4. Error Handling: ✓ The reset + retry pattern is sound - on insertion failure it resets schema and retries, with proper error logging.

The review comment's claims are technically accurate and the implementation follows Cloudflare's recommended practices.


Store/getEventsAfter ID handling follows Cloudflare best practices

The pattern of inserting payload, fetching CAST(id AS TEXT) on writes, and re-injecting id into meta is correct. It safely avoids JS integer overflow and ensures callers receive an authoritative string id while preserving existing meta fields. The error-path reset + retry in store is appropriately guarded and logs triggering errors.
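
For readers skimming, the write path under discussion is roughly the following (a sketch assuming Cloudflare's SqlStorage API with exec(...).one(); ctx is the DurableObjectState, payload the JSON string being stored, and the table/column names are illustrative):

// insert the serialized payload and read the generated id back as TEXT in one round trip
const row = ctx.storage.sql.exec(
  `INSERT INTO "events" (payload, stored_at) VALUES (?, ?) RETURNING CAST(id AS TEXT) AS id`,
  payload,
  Date.now(),
).one()

// keep the id as a string end to end so values beyond Number.MAX_SAFE_INTEGER survive,
// then re-inject it into the stored message's meta before returning it to callers
const id = row.id as string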


177-225: Based on my web search verification, I can now provide a rewritten review comment. The search results confirm that the concern about SQL identifier handling is valid and aligns with SQLite best practices, particularly for Cloudflare Durable Objects which use embedded SQLite.

Validate and properly quote schemaPrefix in SQL identifiers

The schemaPrefix is interpolated directly into SQL identifiers without quoting or escaping:

CREATE TABLE IF NOT EXISTS "${this.schemaPrefix}events" (...)

SQLite best practices require that identifiers be properly quoted with double-quotes and internal quotes escaped (doubled). The current code does not do this. If schemaPrefix contains quotes or special characters, the SQL could break or potentially be exploited.

Add validation to ensure schemaPrefix matches a conservative pattern (e.g., /^[A-Za-z0-9_]+$/), or properly quote and escape the identifier before interpolation:

const escapedPrefix = this.schemaPrefix.replace(/"/g, '""')
this.ctx.storage.sql.exec(`
  CREATE TABLE IF NOT EXISTS "${escapedPrefix}events" (...)
`)

Combining both approaches (validation + quoting) provides defense in depth.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
apps/content/docs/helpers/publisher.md (1)

243-254: Complete the fetch handler example with usage and return statement.

The fetch handler creates a DurablePublisher instance but doesn't demonstrate how to use it or return a response. This leaves developers without a clear integration pattern.

Note: This issue has been extensively flagged in previous review comments by multiple reviewers.
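
For what it's worth, one possible way to complete the example, hedged since the final wording belongs to the docs author: this sketch assumes the fetch RPCHandler, the playground-style messagePublisher context field, a hypothetical ./router module, and wrangler's generated Env bindings type.

import { RPCHandler } from '@orpc/server/fetch'
import { DurablePublisher } from '@orpc/experimental-publisher-durable-object'
import { router } from './router' // hypothetical path for the app router

const handler = new RPCHandler(router)

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // one publisher per request is fine; it only wraps the Durable Object namespace binding
    const messagePublisher = new DurablePublisher<Record<string, { message: string }>>(env.PUBLISHER_DO)

    const result = await handler.handle(request, {
      prefix: '/rpc',
      context: { messagePublisher },
    })

    if (result.matched) {
      return result.response
    }

    return new Response('Not Found', { status: 404 })
  },
}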

🧹 Nitpick comments (1)
packages/publisher-durable-object/src/resume-storage.ts (1)

193-195: Remove redundant index on PRIMARY KEY.

The id column is already defined as PRIMARY KEY on line 187, which implicitly creates an index in SQLite. Creating an explicit index on it is redundant and wastes storage space.

Apply this diff to remove the redundant index:

-    this.ctx.storage.sql.exec(`
-      CREATE INDEX IF NOT EXISTS "${this.schemaPrefix}idx_events_id" ON "${this.schemaPrefix}events" (id)
-    `)
-
     this.ctx.storage.sql.exec(`
       CREATE INDEX IF NOT EXISTS "${this.schemaPrefix}idx_events_stored_at" ON "${this.schemaPrefix}events" (stored_at)
     `)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cfd631d and 1700eb0.

📒 Files selected for processing (4)
  • apps/content/docs/helpers/publisher.md (3 hunks)
  • packages/publisher-durable-object/src/resume-storage.test.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
  • playgrounds/cloudflare-worker/wrangler.jsonc (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/publisher-durable-object/src/resume-storage.test.ts
🧰 Additional context used
🧬 Code graph analysis (1)
packages/publisher-durable-object/src/resume-storage.ts (2)
packages/publisher-durable-object/src/types.ts (1)
  • SerializedMessage (3-6)
packages/shared/src/json.ts (1)
  • stringifyJSON (9-12)
🪛 LanguageTool
apps/content/docs/helpers/publisher.md

[style] ~258-~258: Consider an alternative adverb to strengthen your wording.
Context: ... abort signals, which are necessary for properly cleaning up subscriptions. ```json { ...

(PROPERLY_THOROUGHLY)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
  • GitHub Check: Agent
  • GitHub Check: CodeQL analysis (javascript-typescript)
🔇 Additional comments (11)
playgrounds/cloudflare-worker/wrangler.jsonc (1)

10-12: The enable_request_signal compatibility flag is valid and properly configured.

Verification confirms that enable_request_signal is a documented Cloudflare Workers compatibility flag that enables the Request.signal property for handling abort/cancel events on incoming requests. The flag is correctly placed in the compatibility_flags array and the syntax is valid. No issues identified.

packages/publisher-durable-object/src/resume-storage.ts (7)

1-37: LGTM!

The imports and interface definition are well-structured with comprehensive documentation. The default values are sensible and the remarks provide helpful context about performance characteristics and edge cases.


39-59: LGTM!

The class structure is clean with appropriate encapsulation. The isEnabled getter correctly handles edge cases (Infinity, NaN, negative values) to disable resume functionality.


64-104: LGTM!

The store method correctly handles event persistence with proper error recovery. The use of parameterized queries prevents SQL injection, and the TEXT casting strategy for IDs is well-documented and necessary for JavaScript number safety. The error handling with schema reset is an acceptable trade-off for resilience.


109-138: LGTM!

The replay logic correctly retrieves and reconstructs events with TEXT-casted IDs. The ascending order ensures proper event sequence for resume functionality.


144-175: LGTM!

The alarm method correctly implements inactivity-based cleanup with proper concurrency control. The use of blockConcurrencyWhile ensures atomic checking and deletion, preventing race conditions between alarm execution and new events.
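
As a reading aid, the guarantee described amounts to something like the following skeleton (illustrative, not the package's exact code; the 'lastActivity' key stands in for however the class tracks recent activity):

async function runAlarmCleanup(ctx: DurableObjectState, inactiveMs: number): Promise<void> {
  // storage access is serialized while this callback runs, so the inactivity check
  // and the delete cannot interleave with a concurrent store/subscribe
  await ctx.blockConcurrencyWhile(async () => {
    const lastActivity = (await ctx.storage.get<number>('lastActivity')) ?? 0
    if (Date.now() - lastActivity >= inactiveMs) {
      // clear the alarm and the data together so the object can be fully evicted
      await ctx.storage.deleteAlarm()
      await ctx.storage.deleteAll()
    }
  })
}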


208-220: LGTM!

The deferred cleanup logic is well-designed for performance. The condition correctly prevents excessive cleanup operations while ensuring expired events are eventually removed. The stored_at index supports efficient deletion of old events.
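
Concretely, the gating condition quoted earlier in this thread boils down to a check like this (extracted as a tiny helper purely for illustration):

// returns true once at least one retention window has elapsed since the last sweep
function shouldRunCleanup(lastCleanupTime: number, retentionSeconds: number, now = Date.now()): boolean {
  return lastCleanupTime + retentionSeconds * 1000 <= now
}

Skipping the DELETE when this returns false is what keeps hot paths cheap while still letting expired rows get swept eventually.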


222-231: LGTM!

Both helper methods are correctly implemented. The resetSchema method properly resets state for re-initialization, and scheduleAlarm correctly calculates the combined retention and inactivity window.

apps/content/docs/helpers/publisher.md (3)

151-156: LGTM!

The table update correctly documents the new PublisherDurableObject adapter with accurate information about resume support and a proper reference link.


195-195: LGTM!

The customJsonSerializers option is consistently documented across Redis publisher examples, clearly marked as optional with sensible empty array defaults.

Also applies to: 220-220


257-272: LGTM!

The warnings and notes about enable_request_signal compatibility flag and resume support configuration are helpful and accurate. They provide essential information for developers implementing the Durable Object adapter.

Copilot AI (Contributor) left a comment

Pull request overview

Copilot reviewed 19 out of 21 changed files in this pull request and generated 12 comments.



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

♻️ Duplicate comments (1)
packages/publisher-durable-object/src/resume-storage.ts (1)

109-125: Validate or document expected format for lastEventId

getEventsAfter passes lastEventId directly into WHERE id > ? without validation. An unexpected value (e.g. non-numeric string) can lead to surprising SQL semantics (SQLite will coerce, often resulting in “give me everything after 0”), which may not match caller expectations. This was already flagged in a previous review and is still applicable.

If you intend lastEventId to be a numeric string, consider either:

  1. Validate and short‑circuit:
   async getEventsAfter(lastEventId: string): Promise<string[]> {
     if (!this.isEnabled) {
       return []
     }
 
+    // Defensive guard: only accept plain decimal IDs, otherwise treat as "no events".
+    if (!/^\d+$/.test(lastEventId)) {
+      return []
+    }
+
     await this.ensureSchemaAndCleanup()
@@
-    const result = this.ctx.storage.sql.exec(`
+    const result = this.ctx.storage.sql.exec(`
       SELECT CAST(id AS TEXT) as id, payload
       FROM "${this.schemaPrefix}events"
       WHERE id > ?
       ORDER BY id ASC
     `, lastEventId)
  2. Or, at minimum, document in the JSDoc that lastEventId must be the stringified numeric id previously returned by store, and that other values are unsupported.
🧹 Nitpick comments (1)
packages/publisher-durable-object/src/resume-storage.ts (1)

30-37: Sanitize schemaPrefix to avoid SQL identifier injection / malformed SQL

schemaPrefix is interpolated directly into SQL identifiers for tables and indexes:

  • Table: "${this.schemaPrefix}events"
  • Indexes: "${this.schemaPrefix}idx_events_id", "${this.schemaPrefix}idx_events_stored_at"
  • DDL/DML: DROP TABLE IF EXISTS, DELETE FROM, etc.

If schemaPrefix is ever constructed from untrusted or loosely controlled input, characters like quotes, semicolons, or whitespace could break the query or open a path to SQL‑injection‑style issues (even though parameters are used for values).

A lightweight runtime check in the constructor keeps things safe while still flexible:

   constructor(
     private readonly ctx: DurableObjectState,
     options: ResumeStorageOptions = {},
   ) {
-    this.retentionSeconds = options.retentionSeconds ?? 0
-    this.inactiveDataRetentionTime = options.inactiveDataRetentionTime ?? 6 * 60 * 60
-    this.schemaPrefix = options.schemaPrefix ?? 'orpc:publisher:resume:'
+    this.retentionSeconds = options.retentionSeconds ?? 0
+    this.inactiveDataRetentionTime = options.inactiveDataRetentionTime ?? 6 * 60 * 60
+
+    const schemaPrefix = options.schemaPrefix ?? 'orpc:publisher:resume:'
+    // Allow a conservative set of characters for identifiers to keep SQL well‑formed.
+    if (!/^[A-Za-z0-9_:]+$/.test(schemaPrefix)) {
+      throw new Error('ResumeStorage: schemaPrefix must match /^[A-Za-z0-9_:]+$/')
+    }
+    this.schemaPrefix = schemaPrefix
   }

This preserves current defaults while preventing accidental injection or syntax errors from a bad prefix.

Also applies to: 78-80, 120-123, 185-199, 217-223

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1700eb0 and bfd1177.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (1)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: publish-commit
  • GitHub Check: lint
  • GitHub Check: test
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (1)
packages/publisher-durable-object/src/resume-storage.ts (1)

4-59: API surface, defaults, and isEnabled gating look solid

The options interface, sensible defaults, and isEnabled getter cleanly separate “resume disabled” from the normal path. Constructor wiring is straightforward and easy to reason about.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (6)
packages/tanstack-query/src/procedure-utils.vue.test-d.ts (2)

136-149: Same issue as above: unrelated test disabling.

This segment repeats the same pattern of disabling tests for select inside computed, now for .experimental_streamedOptions. The same concerns apply regarding test coverage and PR scope.


202-215: Same issue as above: unrelated test disabling.

This is the third occurrence of the same pattern, now for .experimental_liveOptions. While the consistency suggests a deliberate approach to a known issue, the concerns about PR scope and test coverage remain.

packages/publisher-durable-object/src/durable-publisher.ts (4)

80-104: Critical: Execution continues after deserialization error, and errors are silently swallowed.

Two critical issues already flagged in past reviews:

  1. Line 96: After a deserialization error on lines 93-94, execution continues to call listener(payloadWithMeta as T[K]) with potentially undefined values. A return statement is needed after the error callback.

  2. Lines 98-104: When options?.onError is not provided, deserialization errors are completely swallowed with no logging or indication of failure.

Apply this diff to fix both issues:

      catch (error) {
-        options?.onError?.(
-          new Error('Failed to deserialize message from publisher durable object', {
-            cause: error,
-          }),
-        )
+        const wrappedError = new Error('Failed to deserialize message from publisher durable object', {
+          cause: error,
+        })
+        if (options?.onError) {
+          options.onError(wrappedError)
+        } else {
+          console.error('Unhandled error in DurablePublisher subscription:', wrappedError)
+        }
+        return
      }

107-123: WebSocket error handlers silently swallow errors.

Already flagged in past reviews: Both close and error handlers call options?.onError?.(), causing errors to be silently ignored when no handler is provided. Additionally, code 1005 (no status received) should be treated as a normal closure.

Apply this diff:

    websocket.addEventListener('close', (event) => {
-      if (event.code !== 1000 && event.code !== 1001) {
-        options?.onError?.(
-          new Error(`WebSocket closed unexpectedly: ${event.code} ${event.reason}`, {
-            cause: event,
-          }),
-        )
+      if (event.code !== 1000 && event.code !== 1001 && event.code !== 1005) {
+        const error = new Error(`WebSocket closed unexpectedly: ${event.code} ${event.reason}`, {
+          cause: event,
+        })
+        if (options?.onError) {
+          options.onError(error)
+        } else {
+          console.error('Unhandled WebSocket close error:', error)
+        }
      }
    })

    websocket.addEventListener('error', (event) => {
-      options?.onError?.(
-        new Error(`Subscription websocket error`, {
-          cause: event,
-        }),
-      )
+      const error = new Error(`Subscription WebSocket error`, {
+        cause: event,
+      })
+      if (options?.onError) {
+        options.onError(error)
+      } else {
+        console.error('Unhandled WebSocket error:', error)
+      }
    })

80-129: Memory leak: WebSocket event listeners are never removed.

Already flagged in past reviews: The subscribeListener method attaches event listeners to the WebSocket but never removes them. When the unsubscribe function is called, only websocket.close() is invoked, but the event listeners remain attached.

Consider using an AbortController for cleanup:

  protected async subscribeListener<K extends keyof T & string>(event: K, listener: (payload: T[K]) => void, options?: PublisherSubscribeListenerOptions): Promise<() => Promise<void>> {
    const stub = this.getStubByName(this.namespace, this.prefix + event)

+   const abortController = new AbortController()
    const headers = new Headers({ upgrade: 'websocket' })
    if (options?.lastEventId !== undefined) {
      headers.set('last-event-id', options.lastEventId)
    }
    const response = await stub.fetch('http://localhost/subscribe', {
      headers,
    })

    const websocket = response.webSocket

    if (!websocket) {
      throw new Error('Failed to open subscription websocket to publisher durable object', {
        cause: response,
      })
    }

-   websocket.addEventListener('message', async (event) => {
+   websocket.addEventListener('message', async (event) => {
      // ... message handler
-   })
+   }, { signal: abortController.signal })

-   websocket.addEventListener('close', (event) => {
+   websocket.addEventListener('close', (event) => {
      // ... close handler
-   })
+   }, { signal: abortController.signal })

-   websocket.addEventListener('error', (event) => {
+   websocket.addEventListener('error', (event) => {
      // ... error handler
-   })
+   }, { signal: abortController.signal })

    websocket.accept()

    return async () => {
+     abortController.abort()
      websocket.close()
    }
  }

125-125: Potential resource leak if accept() throws.

Already flagged in past reviews: If websocket.accept() throws an exception, the event listeners remain attached and the WebSocket is not properly cleaned up.

Wrap in try-catch:

+   try {
      websocket.accept()
+   } catch (error) {
+     websocket.close()
+     throw error
+   }
🧹 Nitpick comments (6)
playgrounds/cloudflare-worker/src/lib/orpc.ts (2)

13-25: Confirm retry semantics and idempotency for ClientRetryPlugin

Placing new ClientRetryPlugin() last in the plugins array makes it the outermost layer around the batched call, which is usually desirable for retries. Please double‑check that the plugin’s default behaviour (retry conditions, max attempts, backoff, etc.) is acceptable for your mix of operations, especially any non‑idempotent mutations or durable-iterator/streaming style calls, and tighten its configuration if needed.
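
If it helps while tuning this, retries can also be scoped per call through the plugin's client context, the same retry field the ChatRoomV2 subscription passes, rather than relying on plugin-wide behaviour (values illustrative):

// opt a single, idempotent call into a bounded number of retries
await client.messageV2.send({ message: 'hello' }, { context: { retry: 3 } })

// calls that omit `retry` in the context fall back to whatever the plugin defaults to
await client.messageV2.send({ message: 'no retry override here' })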


28-28: Validate RouterClient context type and consider reducing manual annotation drift

Typing the client as RouterClient<typeof router, ClientRetryPluginContext> gives nice downstream ergonomics, but it assumes ClientRetryPluginContext fully describes the client plugin context. If other plugins (like DurableIteratorLinkPlugin) contribute to the context, you may need an intersected/combined context type instead of only ClientRetryPluginContext. Also, if you find this annotation can drift from the actual createORPCClient(link) return type over time, consider deriving the type from the factory’s return type instead of hard‑coding generics.

playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx (2)

1-3: Make React event typing explicit to keep the file self‑contained.

You’re using React.FormEvent<HTMLFormElement> in the handler, but there’s no explicit React type import in this module. Depending on your TS config, the React namespace may or may not be globally available; importing the event type keeps this file robust and clearer.

You can switch to a named type import and drop the React. qualifier:

-import { client, orpc } from '../lib/orpc'
-import { useQuery } from '@tanstack/react-query'
+import { client, orpc } from '../lib/orpc'
+import { useQuery } from '@tanstack/react-query'
+import type { FormEvent } from 'react'
@@
-  const sendMessage = async (e: React.FormEvent<HTMLFormElement>) => {
+  const sendMessage = async (e: FormEvent<HTMLFormElement>) => {

Please confirm your TS setup doesn’t rely on ambient React globals; if it does, this change is still a net clarity win.

Also applies to: 13-17


13-20: Use currentTarget instead of casting target and consider minimal UX/error handling.

To avoid casting e.target and to align with React’s event typing, it’s slightly safer/clearer to use e.currentTarget, which is guaranteed to be the <form> for this handler. You can also (optionally) clear the form or surface send errors.

-  const sendMessage = async (e: FormEvent<HTMLFormElement>) => {
+  const sendMessage = async (e: FormEvent<HTMLFormElement>) => {
     e.preventDefault()
 
-    const form = new FormData(e.target as HTMLFormElement)
+    const form = new FormData(e.currentTarget)
     const message = form.get('message') as string
 
-    await client.messageV2.send({ message })
+    try {
+      await client.messageV2.send({ message })
+      // Optional: reset after successful send
+      // e.currentTarget.reset()
+    } catch (err) {
+      // Optional: log or surface error somewhere in the UI
+      // console.error('Failed to send message', err)
+    }
   }

If you prefer to keep the example minimal, at least the currentTarget change is worth doing.

playgrounds/cloudflare-worker/worker/index.ts (1)

85-85: Consider using a concrete type instead of any.

The messagePublisher is typed as Publisher<Record<string, { message: string }>> in ORPCContext, but here it's instantiated with <any>. Using the concrete type would improve type safety.

Apply this diff:

-    const messagePublisher = new DurablePublisher<any>(env.PUBLISHER_DO)
+    const messagePublisher = new DurablePublisher<Record<string, { message: string }>>(env.PUBLISHER_DO)
packages/publisher-durable-object/src/durable-publisher.ts (1)

30-38: Consider tighter typing for the namespace parameter.

The namespace parameter is typed as DurableObjectNamespace<any>. If the actual Durable Object type is known, using it instead of any would improve type safety.

Example if the DO type is available:

  constructor(
-    private readonly namespace: DurableObjectNamespace<any>,
+    private readonly namespace: DurableObjectNamespace<PublisherDurableObject>,
    { prefix, getStubByName, ...options }: DurablePublisherOptions = {},
  ) {
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bfd1177 and c1c45b2.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (16)
  • apps/content/package.json (1 hunks)
  • packages/openrpc/package.json (1 hunks)
  • packages/publisher-durable-object/package.json (1 hunks)
  • packages/publisher-durable-object/src/durable-object.ts (1 hunks)
  • packages/publisher-durable-object/src/durable-publisher.ts (1 hunks)
  • packages/tanstack-query/src/procedure-utils.vue.test-d.ts (3 hunks)
  • playgrounds/cloudflare-worker/package.json (1 hunks)
  • playgrounds/cloudflare-worker/src/App.tsx (2 hunks)
  • playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx (1 hunks)
  • playgrounds/cloudflare-worker/src/lib/orpc.ts (2 hunks)
  • playgrounds/cloudflare-worker/worker/dos/publisher.ts (1 hunks)
  • playgrounds/cloudflare-worker/worker/index.ts (5 hunks)
  • playgrounds/cloudflare-worker/worker/orpc.ts (1 hunks)
  • playgrounds/cloudflare-worker/worker/routers/index.ts (2 hunks)
  • playgrounds/cloudflare-worker/worker/routers/message-v2.ts (1 hunks)
  • playgrounds/cloudflare-worker/wrangler.jsonc (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/openrpc/package.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/publisher-durable-object/src/durable-object.ts
🧰 Additional context used
🧬 Code graph analysis (6)
playgrounds/cloudflare-worker/worker/dos/publisher.ts (1)
packages/publisher-durable-object/src/durable-object.ts (1)
  • PublisherDurableObject (9-66)
playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx (2)
playgrounds/cloudflare-worker/src/lib/orpc.ts (2)
  • orpc (30-30)
  • client (28-28)
playgrounds/cloudflare-worker/worker/routers/message.ts (1)
  • sendMessage (15-22)
playgrounds/cloudflare-worker/src/App.tsx (1)
playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx (1)
  • ChatRoomV2 (4-44)
playgrounds/cloudflare-worker/worker/routers/index.ts (1)
playgrounds/cloudflare-worker/worker/routers/message-v2.ts (2)
  • onMessageV2 (4-6)
  • sendMessageV2 (8-12)
playgrounds/cloudflare-worker/worker/orpc.ts (1)
packages/server/src/builder.ts (1)
  • os (336-352)
playgrounds/cloudflare-worker/worker/routers/message-v2.ts (1)
playgrounds/cloudflare-worker/worker/orpc.ts (1)
  • pub (11-13)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: publish-commit
  • GitHub Check: lint
  • GitHub Check: test
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (18)
apps/content/package.json (1)

42-42: Verify the rationale for using a beta version of the "ai" package.

The "ai" dependency was bumped from a stable release (5.0.76) to a pre-release beta version (6.0.0-beta.85). Using beta versions introduces potential instability and breaking changes. Please confirm:

  1. Is this beta version bump intentional and required by the new durable-object publisher features?
  2. Have you verified compatibility between ai@6.0.0-beta.85 and the pinned AI SDK packages (@ai-sdk/google@2.0.40, @ai-sdk/react@2.0.98)?
  3. Are there tests or CI checks validating this dependency combination?

If this version is essential, consider documenting the rationale in the PR description.

playgrounds/cloudflare-worker/src/lib/orpc.ts (1)

7-9: Type-only import and plugin imports are clean and idiomatic

Using import type for ClientRetryPluginContext while importing the runtime plugins from the same module keeps the bundle tree-shakeable and type-safe; no issues here.

packages/tanstack-query/src/procedure-utils.vue.test-d.ts (1)

70-83: The original review comment is based on incorrect context about the PR's actual objective.

This commit ("dedupe") is a full repository initialization/deduplication, not a targeted PR about adding a Cloudflare Durable Object adapter. The file packages/tanstack-query/src/procedure-utils.vue.test-d.ts was entirely added in this commit (not modified from prior work), and the commented-out test cases with TODO comments are part of the initial implementation.

The TODOs document a known limitation of vue-query: the select option cannot be properly typed inside computed() calls when using useQueries. This is not an accidental inclusion but rather an intentional acknowledgment of a library limitation that the maintainers are aware of. Similar patterns exist in the React Query tests in the same repository with comparable type assertion challenges.

The changes are appropriate for an initial commit establishing test coverage for the Vue Query integration, including tests for supported scenarios and documented limitations.

playgrounds/cloudflare-worker/src/components/chat-room-v2.tsx (1)

4-12: Chat subscription + basic UI wiring looks correct for the playground.

The useQuery(orpc.messageV2.on.experimental_streamedOptions({ context: { retry: Number.POSITIVE_INFINITY } })) usage and the conditional rendering of pending/error/success states are consistent with TanStack Query v5 semantics, and the list rendering from query.data matches the expected shape.

Just double‑check that infinite retry is desired for this playground scenario (it’s fine here, but might not be ideal if reused in a production‑like example).

Also applies to: 22-41

playgrounds/cloudflare-worker/src/App.tsx (1)

7-7: Wiring of ChatRoomV2 into the playground looks good—confirm double chat room is intentional.

The import path and JSX placement look correct, and rendering both <ChatRoom /> and <ChatRoomV2 /> with separators makes sense for comparing implementations in the playground.

Just confirm that having two independent chat UIs on the same page is desired for this demo (both will subscribe/send on the same backend room).

Also applies to: 37-38

playgrounds/cloudflare-worker/worker/orpc.ts (1)

1-16: LGTM!

The context type is properly structured with clear typing for the new messagePublisher property. The integration with existing middleware is clean.

playgrounds/cloudflare-worker/worker/routers/index.ts (1)

3-3: LGTM!

Router wiring for the new messageV2 handlers is straightforward and follows the existing pattern.

Also applies to: 28-31

playgrounds/cloudflare-worker/worker/dos/publisher.ts (1)

1-11: LGTM!

Clean implementation with appropriate resume retention configuration for a playground environment.

playgrounds/cloudflare-worker/worker/index.ts (1)

14-14: LGTM!

The messagePublisher is properly integrated into both RPC and OpenAPI handler contexts, and the PublisherDO export is correctly added.

Also applies to: 98-99, 110-111, 123-123

playgrounds/cloudflare-worker/package.json (1)

18-19: LGTM!

The new experimental publisher dependencies are appropriately added to support the Durable Object publisher functionality.

playgrounds/cloudflare-worker/worker/routers/message-v2.ts (1)

1-12: LGTM!

The message handlers properly implement pub/sub patterns with input validation, signal support for cancellation, and lastEventId for event replay.

playgrounds/cloudflare-worker/wrangler.jsonc (1)

10-12: LGTM!

The Cloudflare Worker configuration correctly adds the PUBLISHER_DO binding, enables request signal support, and includes the necessary migration for the new Durable Object class.

Also applies to: 22-22, 27-27

packages/publisher-durable-object/src/durable-publisher.ts (3)

1-23: LGTM!

The DurablePublisherOptions interface is well-structured with clear documentation for the prefix and getStubByName options.


40-59: LGTM!

The publish method correctly serializes the payload with event metadata and handles HTTP errors appropriately.


61-79: LGTM!

The WebSocket subscription setup correctly handles the lastEventId for event replay and validates the WebSocket response.

packages/publisher-durable-object/package.json (3)

1-15: Package metadata looks good.

The name correctly flags this as experimental, and all standard fields are properly configured for the workspace.


16-30: Export configuration is idiomatic.

The dual-export pattern (TypeScript source during development, built JS in distribution) aligns with TypeScript package best practices and enables proper IDE support without requiring consumers to build from source.


31-47: Build and dependency configuration is sound.

The @cloudflare/workers-types version 4.20251121.0 uses Cloudflare's date-based versioning scheme, and the caret constraint permits appropriate patch and date-based updates. Type checking covers both source and tests, which is a solid practice for this type of package.

@unnoq unnoq force-pushed the feat/publisher/durable-object-adapter branch from c1c45b2 to c8f6eb4 on Nov 27, 2025 04:43
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/publisher-durable-object/src/resume-storage.test.ts (1)

296-314: Timing assertion may be flaky in slow CI environments.

The test verifies that alarmTime falls within 1 second of the expected delay. As noted in a previous review, this tolerance may be insufficient in slow CI environments or under load.

Consider the suggestion from the previous review to increase the tolerance to 5 seconds for more robust testing.

🧹 Nitpick comments (3)
packages/tanstack-query/src/procedure-utils.vue.test-d.ts (1)

70-83: Commented‑out select‑in‑computed cases reduce useQueries type coverage

You’ve worked around the useQueries + computed + select typing issue by commenting out the third query and its expectTypeOf assertions in the .queryOptions, .streamedOptions, and .liveOptions suites. That’s fine as a short‑term unblock, but it silently drops coverage for select in useQueries.

To keep tests meaningful while you chase the underlying bug, consider one of:

  1. Keep select coverage without computed (recommended interim step):
@@
-          // TODO: fix this, cannot define select inside computed
-          // computed(() => optionalUtils.queryOptions({
-          //   select: data => ({ mapped: data }),
-          // })),
+          // TODO: fix this, cannot define select inside computed
+          // Once fixed, wrap this in `computed` again.
+          optionalUtils.queryOptions({
+            select: data => ({ mapped: data }),
+          }),
@@
-      // expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput } | undefined>()
+      expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput } | undefined>()
@@
-      // expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()
+      expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()

…and mirror the same pattern for the experimental_streamedOptions and experimental_liveOptions useQueries blocks:

-          // TODO: fix this, cannot define select inside computed
-          // computed(() => streamUtils.experimental_streamedOptions({
-          //   select: data => ({ mapped: data }),
-          // })),
+          // TODO: fix this, cannot define select inside computed
+          streamUtils.experimental_streamedOptions({
+            select: data => ({ mapped: data }),
+          }),
@@
-      // expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput } | undefined>()
+      expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput } | undefined>()
@@
-      // expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()
+      expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()
-          // TODO: fix this, cannot define select inside computed
-          // computed(() => streamUtils.experimental_liveOptions({
-          //   select: data => ({ mapped: data }),
-          // })),
+          // TODO: fix this, cannot define select inside computed
+          streamUtils.experimental_liveOptions({
+            select: data => ({ mapped: data }),
+          }),
@@
-      // expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput[number] } | undefined>()
+      expectTypeOf(queries.value[2].data).toEqualTypeOf<{ mapped: UtilsOutput[number] } | undefined>()
@@
-      // expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()
+      expectTypeOf(queries.value[2].error).toEqualTypeOf<null | UtilsError>()
  2. Or explicitly encode the current limitation by keeping the computed version and marking it with @ts-expect-error to assert that the pattern is currently unsupported rather than just disabling it via comments.

Either approach makes the intent clearer and preserves some type‑level guarantees around select + useQueries, instead of quietly dropping that scenario from the test surface.

Also applies to: 136-149, 202-215

packages/publisher-durable-object/src/resume-storage.test.ts (2)

377-423: Consider refactoring this slow, timing-dependent test.

This integration test has several concerns:

  • Total sleep time of ~10.5 seconds makes it slow
  • Marked with retry: 3, indicating known flakiness
  • Timing-dependent assertions may fail under load

While the test provides valuable coverage of cleanup scenarios, consider these improvements:

  • Split into smaller, focused tests that don't require multiple sleep cycles
  • Use mock timers (e.g., vi.useFakeTimers()) to control time without actual delays
  • If real timing is necessary, reduce the number of scenarios tested in a single case

Example approach with fake timers:

it('cleans up expired events', async () => {
  vi.useFakeTimers()
  const ctx = createDurableObjectState()
  const storage = new ResumeStorage(ctx, { retentionSeconds: 1 })
  
  await storage.store(JSON.stringify({ data: 'event1' }))
  
  vi.advanceTimersByTime(2100) // Fast-forward time
  await storage.store(JSON.stringify({ data: 'event2' }))
  
  // assertions...
  
  vi.useRealTimers()
})

1-425: Consider adding test helper functions to reduce duplication.

Throughout the test suite, there's repeated code for creating contexts and storage instances with common configurations. Extracting helper functions could improve maintainability.

Example:

function createStorageWithRetention(retentionSeconds: number, options?: Partial<ResumeStorageOptions>) {
  const ctx = createDurableObjectState()
  const storage = new ResumeStorage(ctx, { retentionSeconds, ...options })
  return { ctx, storage }
}

// Usage:
const { ctx, storage } = createStorageWithRetention(60)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c1c45b2 and 86faa6e.

📒 Files selected for processing (2)
  • packages/publisher-durable-object/src/resume-storage.test.ts (1 hunks)
  • packages/tanstack-query/src/procedure-utils.vue.test-d.ts (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: lint
  • GitHub Check: test
  • GitHub Check: publish-commit
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
packages/publisher-durable-object/src/resume-storage.test.ts (4)

6-14: LGTM!

Good use of parameterized testing to validate that invalid retentionSeconds values properly disable the storage.


16-117: LGTM!

Comprehensive test coverage for the store method, including lazy initialization, ID generation, meta preservation, custom prefixes, and error recovery. The error recovery test (lines 85-116) appropriately simulates ID overflow to verify the reset mechanism.


119-207: LGTM!

Thorough test coverage for getEventsAfter, including edge cases like ID 0, empty results, and meta property preservation.


317-374: LGTM!

Excellent coverage of schema initialization edge cases, including failure atomicity (schema not created if alarm fails), idempotence, and proper index creation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/publisher-durable-object/src/resume-storage.test.ts (1)

312-313: Address the flaky timing assertion.

This is the same timing assertion flagged in the previous review. The 1-second tolerance is insufficient for CI environments or systems under load.

Consider applying the previously suggested fix to increase tolerance to 5 seconds:

-      expect(alarmTime).toBeGreaterThanOrEqual(Date.now() + expectedDelay - 1000)
-      expect(alarmTime).toBeLessThanOrEqual(Date.now() + expectedDelay + 1000)
+      expect(alarmTime).toBeGreaterThanOrEqual(Date.now() + expectedDelay - 5000)
+      expect(alarmTime).toBeLessThanOrEqual(Date.now() + expectedDelay + 5000)
🧹 Nitpick comments (2)
packages/publisher-durable-object/src/resume-storage.test.ts (2)

253-272: Consider consistent sleep margins across tests.

This test waits 3 seconds (3x retention time) for a 1-second retention to expire, while the cleanup test at line 410 uses 2.1 seconds (2.1x). While both work, consistent margins would improve test maintainability.

Consider extracting sleep duration calculation to a helper:

const waitForExpiration = (retentionSeconds: number, safetyMargin = 2.1) => 
  sleep(retentionSeconds * 1000 * safetyMargin)

398-444: Consider decomposing this complex timing-dependent test.

This test has multiple concerns that increase fragility:

  • 15 verification steps across ~10.5 seconds of sleep time
  • retry: 3 flag suggests known flakiness
  • Tests multiple scenarios in one test case (cleanup on store, getEventsAfter, alarm, different retention times)

While the coverage is valuable, breaking this into focused tests (e.g., "cleanup on store", "cleanup on getEventsAfter", "cleanup respects retention time") would improve reliability and debugging experience.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 86faa6e and 3f5e971.

📒 Files selected for processing (4)
  • packages/publisher-durable-object/package.json (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.test.ts (1 hunks)
  • packages/publisher-durable-object/src/resume-storage.ts (1 hunks)
  • packages/publisher-durable-object/tests/shared.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
  • packages/publisher-durable-object/package.json
🚧 Files skipped from review as they are similar to previous changes (2)
  • packages/publisher-durable-object/tests/shared.ts
  • packages/publisher-durable-object/src/resume-storage.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Cloudflare Pages
🔇 Additional comments (4)
packages/publisher-durable-object/src/resume-storage.test.ts (4)

6-14: LGTM! Good edge case coverage.

The parameterized test effectively covers invalid retentionSeconds values with a clean approach.


85-116: Excellent error handling test.

The simulation of integer overflow by inserting the max int64 value is a realistic and thorough approach to testing the reset logic.


119-207: LGTM! Comprehensive retrieval test coverage.

The tests effectively cover edge cases including ID 0, empty results, and meta property preservation.


317-395: LGTM! Robust initialization tests.

The tests thoroughly cover error scenarios, idempotence, and edge cases like existing alarms and schema reuse across instances.

@unnoq unnoq merged commit e565e67 into main Nov 27, 2025
10 checks passed
