Feat: Batch Streaming & Progressive Rendering for Proxy Loader #9094

@tobiu

Description


Objective

Optimize the Neo.data.proxy.Stream and Neo.data.Store integration to solve the performance regression caused by single-record event firing. Implement chunked streaming and progressive UI updates.

Problem

The initial implementation fired a data event for every single record in the NDJSON stream. For 11k records, this caused massive overhead (11k events, 11k store updates, 11k microtasks), blocking the App Worker for ~10s.
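The event-count arithmetic behind the regression is easy to verify (the figures 11k records and a chunkSize of 500 come from this issue; the variable names are illustrative):

```javascript
// Back-of-the-envelope: data events fired before vs. after batching.
const totalRecords = 11_000, // record count from this issue
      chunkSize    = 500;    // proposed batch size

const perRecordEvents = totalRecords;                        // old behaviour: one event per record
const batchedEvents   = Math.ceil(totalRecords / chunkSize); // new behaviour: one event per chunk

console.log(perRecordEvents, batchedEvents); // 11000 events shrink to 22
```

Each of those 11k events also scheduled a store update and a microtask, which is what saturated the App Worker.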

Solution

  1. Batching (chunkSize): Update Stream.mjs to buffer parsed records and fire the data event only when chunkSize (e.g., 500) is reached.
  2. Progressive Rendering: Update Store.mjs to remove suspendEvents during the stream. Instead, listen for the chunked data events, add the chunk to the store, and fire a load event immediately. This allows the Grid to render the first chunk (~500 rows) almost instantly while the rest of the stream continues in the background.
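The buffering logic in step 1 can be sketched as follows. This is a minimal standalone sketch, not the actual Stream.mjs code: the factory name, the onData callback, and the feed/end methods are assumptions made for illustration. It shows the two buffers the real implementation needs, one for partial NDJSON lines split across network chunks, and one for parsed records awaiting a flush:

```javascript
// Hypothetical sketch of chunked NDJSON batching (names are illustrative,
// not the actual Neo.data.proxy.Stream API).
function createNdjsonBatcher({chunkSize = 500, onData}) {
    let buffer  = '', // carries a partial line between network chunks
        records = []; // parsed records waiting to be flushed

    const flush = () => {
        if (records.length > 0) {
            onData(records); // one data event per batch, not per record
            records = [];
        }
    };

    return {
        // feed() receives raw text chunks from the network stream
        feed(text) {
            buffer += text;

            const lines = buffer.split('\n');
            buffer = lines.pop(); // last element may be an incomplete line

            for (const line of lines) {
                if (line.trim() !== '') {
                    records.push(JSON.parse(line));
                    if (records.length >= chunkSize) flush();
                }
            }
        },
        // end() flushes any trailing data when the stream closes
        end() {
            if (buffer.trim() !== '') records.push(JSON.parse(buffer));
            buffer = '';
            flush();
        }
    };
}
```

Feeding 1,200 records through this sketch in arbitrary network-sized slices yields batches of 500, 500 and 200, regardless of where the slices cut the JSON lines.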

Tasks

  • Add chunkSize config to Neo.data.proxy.Stream.
  • Implement buffering logic in Stream.read() to yield arrays of records.
  • Refactor Neo.data.Store.load() to support progressive loading (remove suspendEvents, fire intermediate load events).
  • Update Unit Tests (Stream.spec.mjs, StoreProxy.spec.mjs).
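The progressive-loading refactor in the third task can be sketched with a toy store. This is not the actual Store.mjs implementation; the class, method and event names here are assumptions. The point is the shape of the change: each chunked data event is added to the store and a load event fires immediately, instead of suspending events until the stream ends:

```javascript
// Hypothetical sketch of progressive store loading (names are illustrative).
class ProgressiveStore {
    items     = [];
    listeners = {load: []};

    on(event, fn)     {this.listeners[event].push(fn)}
    fire(event, data) {this.listeners[event].forEach(fn => fn(data))}

    // Called once per chunked `data` event from the stream proxy.
    // The grid can render the first chunk right away, while later
    // chunks keep arriving in the background.
    onDataChunk(records, isLast = false) {
        this.items.push(...records);
        this.fire('load', {
            items    : this.items,
            isLoading: !isLast // lets consumers show a "still streaming" hint
        });
    }
}
```

With this shape, a grid bound to the load event re-renders (or appends) on every chunk, which is what makes the scrollbar grow as data streams in.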

Expected Outcome

  • Time to First Byte (TTFB): Unchanged.
  • Time to First Render (TTFR): Drastically reduced (O(chunkSize) instead of O(total records)).
  • UX: User sees the grid populate immediately. Scrollbar grows as data streams in.

Metadata

Assignees

Labels

ai, core (Core framework functionality), performance (Performance improvements and optimizations)

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests