fix(perf): debounce storage writes, batch events, async health checks #497
Conversation
Every keystroke triggered full JSON serialization of all composer drafts (including base64 image attachments) and a synchronous localStorage write. At normal typing speed this caused 5+ writes/sec, blocking the main thread and creating noticeable input lag. Wrap the Zustand persist storage with a 300ms debounce. In-memory state updates remain immediate; only the serialization and storage write are deferred. A beforeunload handler flushes pending writes to prevent data loss. The removeItem method cancels any pending setItem to avoid resurrecting cleared drafts. Adds unit tests for the DebouncedStorage utility covering debounce timing, rapid writes, removeItem cancellation, flush, and edge cases.
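The debounce wrapper described above can be sketched as follows. This is a minimal illustration, not the PR's exact `DebouncedStorage` implementation; the `StateStorage` shape matches Zustand's persist contract, and all member names here are assumptions.

```typescript
// Minimal sketch of a debounced StateStorage wrapper (illustrative, not the
// PR's exact code). In-memory reads stay immediate; writes are deferred.
type StateStorage = {
  getItem: (name: string) => string | null;
  setItem: (name: string, value: string) => void;
  removeItem: (name: string) => void;
};

export class DebouncedStorage implements StateStorage {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private pending: { name: string; value: string } | null = null;

  constructor(
    private readonly inner: StateStorage,
    private readonly delayMs = 300,
  ) {}

  getItem(name: string): string | null {
    // Prefer the pending value so callers never read stale state
    // during the debounce window.
    return this.pending?.name === name
      ? this.pending.value
      : this.inner.getItem(name);
  }

  setItem(name: string, value: string): void {
    // Each call resets the timer; only the last value in a burst is written.
    this.pending = { name, value };
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.delayMs);
  }

  removeItem(name: string): void {
    // Cancel any pending setItem so a cleared draft is not resurrected.
    if (this.pending?.name === name) {
      this.pending = null;
      if (this.timer) clearTimeout(this.timer);
      this.timer = null;
    }
    this.inner.removeItem(name);
  }

  flush(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    if (this.pending) {
      this.inner.setItem(this.pending.name, this.pending.value);
      this.pending = null;
    }
  }
}

// Flush on unload so the final keystrokes survive a page close:
// window.addEventListener("beforeunload", () => storage.flush());
```

A wrapper like this keeps the expensive `JSON.stringify` plus synchronous `localStorage.setItem` off the keystroke path while preserving read-after-write consistency.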
The useStore subscriber called persistState on every state mutation, triggering JSON.stringify + localStorage.setItem synchronously. It also ran 8 localStorage.removeItem calls for legacy keys on every fire. Wrap the subscriber with a 500ms debounce so rapid state changes batch into a single write. Move legacy key cleanup behind a one-time flag so it runs only once per page load. Add a beforeunload handler to flush the final state.
During active sessions, every domain event triggered a full syncSnapshot (IPC fetch + state rebuild + React re-render cascade) and sometimes a provider query invalidation. Events fire in rapid bursts during AI turns. Replace per-event processing with a throttle-first pattern: schedule a flush on the first event, absorb subsequent events within a 100ms window, then sync once. Provider query invalidation is batched via a flag. Since syncSnapshot fetches the complete snapshot, no events are lost by skipping intermediate syncs.
The ProviderHealth layer blocked server startup with two sequential CLI spawns (codex --version + codex login status), each with a 4-second timeout, delaying startup by up to 8 seconds. Run health checks in the background via Effect.runPromise so the layer resolves immediately with a placeholder status. Add an onReady callback to ProviderHealthShape so wsServer can push the resolved statuses to connected clients once checks complete, preventing early-connecting clients from showing "Checking..." indefinitely.
This resolves my issue #440. Thank you!
@t3dotgg could u merge this please, some performance fixes |
Some fixes:
commit 6a38e3562d452605ee23072c6a8f7e9e321360d1
Author: Tim Smart <hello@timsmart.co>
Date: Mon Mar 9 15:15:51 2026 +1300
cleanup
diff --git a/apps/server/src/provider/Layers/ProviderHealth.ts b/apps/server/src/provider/Layers/ProviderHealth.ts
index 41fc382a..4eb6a288 100644
--- a/apps/server/src/provider/Layers/ProviderHealth.ts
+++ b/apps/server/src/provider/Layers/ProviderHealth.ts
@@ -13,7 +13,7 @@ import type {
ServerProviderStatus,
ServerProviderStatusState,
} from "@t3tools/contracts";
-import { Effect, Layer, Option, Result, Stream } from "effect";
+import { Array, Effect, Fiber, Layer, Option, Result, Stream } from "effect";
import { ChildProcess, ChildProcessSpawner } from "effect/unstable/process";
import {
@@ -312,56 +312,13 @@ export const checkCodexProviderStatus: Effect.Effect<
export const ProviderHealthLive = Layer.effect(
ProviderHealth,
Effect.gen(function* () {
- const spawner = yield* ChildProcessSpawner.ChildProcessSpawner;
- let cachedStatuses: ReadonlyArray<ServerProviderStatus> = [
- {
- provider: CODEX_PROVIDER,
- status: "warning",
- available: false,
- authStatus: "unknown",
- checkedAt: new Date().toISOString(),
- message: "Checking Codex CLI availability...",
- },
- ];
-
- let readyListeners: Array<(statuses: ReadonlyArray<ServerProviderStatus>) => void> = [];
- let resolved = false;
-
- const notifyReady = (statuses: ReadonlyArray<ServerProviderStatus>) => {
- resolved = true;
- cachedStatuses = statuses;
- for (const cb of readyListeners) cb(statuses);
- readyListeners = [];
- };
-
- // Run health checks in the background so they don't block server startup.
- checkCodexProviderStatus.pipe(
- Effect.provideService(ChildProcessSpawner.ChildProcessSpawner, spawner),
- Effect.runPromise,
- ).then((status) => {
- notifyReady([status]);
- }).catch(() => {
- notifyReady([
- {
- provider: CODEX_PROVIDER,
- status: "error",
- available: false,
- authStatus: "unknown",
- checkedAt: new Date().toISOString(),
- message: "Failed to check Codex CLI status.",
- },
- ]);
- });
+ const codexStatusFiber = yield* checkCodexProviderStatus.pipe(
+ Effect.map(Array.of),
+ Effect.forkScoped,
+ );
return {
- getStatuses: Effect.sync(() => cachedStatuses),
- onReady: (cb) => {
- if (resolved) {
- cb(cachedStatuses);
- } else {
- readyListeners.push(cb);
- }
- },
+ getStatuses: Fiber.join(codexStatusFiber),
} satisfies ProviderHealthShape;
}),
);
diff --git a/apps/server/src/wsServer.ts b/apps/server/src/wsServer.ts
index 2f7ed0bc..dacf66f0 100644
--- a/apps/server/src/wsServer.ts
+++ b/apps/server/src/wsServer.ts
@@ -26,6 +26,7 @@ import {
WebSocketRequest,
WsPush,
WsResponse,
+ ServerProviderStatus,
} from "@t3tools/contracts";
import * as NodeHttpServer from "@effect/platform-node/NodeHttpServer";
import {
@@ -268,9 +269,6 @@ export const createServer = Effect.fn(function* (): Effect.fn.Return<
),
);
- // Read provider statuses lazily so background health checks are reflected.
- const getProviderStatuses = () => Effect.runSync(providerHealth.getStatuses);
-
const clients = yield* Ref.make(new Set<WebSocket>());
const logger = createLogger("ws");
@@ -618,6 +616,23 @@ export const createServer = Effect.fn(function* (): Effect.fn.Return<
const subscriptionsScope = yield* Scope.make("sequential");
yield* Effect.addFinalizer(() => Scope.close(subscriptionsScope, Exit.void));
+ // Push updated provider statuses to connected clients once background health checks finish.
+ let providers: ReadonlyArray<ServerProviderStatus> = [];
+ yield* providerHealth.getStatuses.pipe(
+ Effect.flatMap((statuses) => {
+ providers = statuses;
+ return broadcastPush({
+ type: "push",
+ channel: WS_CHANNELS.serverConfigUpdated,
+ data: {
+ issues: [],
+ providers: statuses,
+ },
+ });
+ }),
+ Effect.forkIn(subscriptionsScope),
+ );
+
yield* Stream.runForEach(orchestrationEngine.streamDomainEvents, (event) =>
broadcastPush({
type: "push",
@@ -632,23 +647,11 @@ export const createServer = Effect.fn(function* (): Effect.fn.Return<
channel: WS_CHANNELS.serverConfigUpdated,
data: {
issues: event.issues,
- providers: getProviderStatuses(),
+ providers,
},
}),
).pipe(Effect.forkIn(subscriptionsScope));
- // Push updated provider statuses to connected clients once background health checks finish.
- providerHealth.onReady((statuses) => {
- broadcastPush({
- type: "push",
- channel: WS_CHANNELS.serverConfigUpdated,
- data: {
- issues: [],
- providers: statuses,
- },
- }).pipe(Effect.runPromise).catch(() => {});
- });
-
yield* Scope.provide(orchestrationReactor.start, subscriptionsScope);
let welcomeBootstrapProjectId: ProjectId | undefined;
@@ -896,7 +899,7 @@ export const createServer = Effect.fn(function* (): Effect.fn.Return<
keybindingsConfigPath,
keybindings: keybindingsConfig.keybindings,
issues: keybindingsConfig.issues,
- providers: getProviderStatuses(),
+ providers,
availableEditors,
};
- Drop the unused `onReady` hook from `ProviderHealthShape`
- Keep startup health status access focused on `getStatuses`
- Replace manual timeout debounce logic with `@tanstack/react-pacer`'s `Debouncer`
- Persist updates via `maybeExecute` to reduce localStorage write thrashing
- Flush pending persistence on `beforeunload` to avoid losing recent state
- Replace manual timeout-based domain event batching with `Throttler`
- Keep provider query invalidation batched with trailing 100ms flushes
- Cancel throttler and reset invalidation flag during EventRouter cleanup
- Replace manual timeout/pending-value debounce logic with `@tanstack/react-pacer` `Debouncer`
- Keep `removeItem`/`flush` behavior while simplifying and standardizing persistence timing
@juliusmarminge thanks
What
Address multiple performance bottlenecks causing multi-second input lag, slow view transitions, and delayed startup on desktop v0.0.4.
Why
Issue #440 reports severe desktop performance on a MacBook Pro M3 Max: minutes-long startup, 10-15 second view transitions, and ~5 second chat input lag. Root cause analysis identified four independent bottlenecks, all platform-agnostic except health check timing.
Key Changes
Includes unit tests for the DebouncedStorage utility.
Related to #440
Note
Debounce localStorage writes, batch domain events, and run health checks async
- Debounces state persistence in `store.ts` by 500ms and composer draft persistence in `composerDraftStore.ts` by 300ms; both flush pending writes on `beforeunload`.
- Batches domain event processing in `__root.tsx` within a 100ms throttle window.
- Runs provider health checks in the background in `ProviderHealth.ts`, so layer construction no longer blocks on status checks, and pushes resolved statuses to clients from `wsServer.ts`.
- Behavior note: `serverGetConfig` returns an empty providers list until health checks finish, and domain event side effects are delayed up to 100ms.
Changes since #497 opened
Macroscope summarized f9dab13.