A sensor spikes. An engine runs hot. A building's energy use surges. A server degrades. A cold chain breaks.
winkComposer turns live data streams into clear, actionable insights by composing small, focused building blocks — signal conditioning, anomaly detection, and health assessment — into pipelines underpinned by neural-network intelligence.
A high-performance JavaScript framework for IIoT and beyond. Runs on a Raspberry Pi, a production server, or a Kubernetes cluster in the cloud. Purpose-built for SMBs and MSMEs. Integrates with QuestDB, Grafana, and Mosquitto — open source, end to end.
winkComposer calls its building blocks nodes — each with a single responsibility, wired through a declarative flow language. Small vocabulary, unbounded composition — the same nodes that detect bearing wear also catch server latency degradation and process yield drift.
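The node-composition idea can be sketched in plain JavaScript. This is illustrative only — the node names (`smooth`, `detect`) and signatures are hypothetical, not winkComposer's actual API: each node enriches the message it receives, and a flow is simply their composition.

```javascript
// Sketch only: each node is a function message → enriched message.
const smooth = (alpha) => {
  let est = null; // private node state: exponential smoothing estimate
  return (msg) => {
    est = est === null ? msg.value : alpha * msg.value + (1 - alpha) * est;
    return { ...msg, smoothed: est };
  };
};

const detect = (limit) => (msg) =>
  ({ ...msg, anomaly: msg.smoothed > limit });

// A flow composes nodes left to right into one pipeline.
const pipeline = (...nodes) => (msg) =>
  nodes.reduce((m, node) => node(m), msg);

const flow = pipeline(smooth(0.3), detect(80));
console.log(flow({ asset: 'motor-1', value: 95 }));
```

Because every node has a single responsibility, the same `detect` node works whether the upstream signal is a bearing vibration, a server latency, or a process yield.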
> [!NOTE]
> The documentation site is live and growing — interactive demos run real winkComposer nodes in your browser, no installation or sign-up required. winkComposer is transitioning to open source; the repository and full source will follow as development progresses. MCP integration for AI-driven queries over pre-computed insights is in active development.
Real data. Real insight. Each demo runs the same winkComposer core that powers edge-to-cloud deployments — right here in your browser.
| Detecting Bearing Failure | Detecting Server Latency Degradation | Catching Process Drift |
|---|---|---|
| Predictive Maintenance | AIOps | Process Control |
| Resource | What it covers |
|---|---|
| Hello Flow! | Build a 4-node temperature monitor from scratch — smooth, detect, confirm, broadcast — with an interactive demo running real nodes in your browser. The natural starting point. |
| Recipes | Focused, runnable patterns for common detection problems: gradual drift (fast/slow esMean crossover with Page-Hinkley), sudden shifts (Kalman filter), sensor freeze (collapsed standard deviation), and subtle process shifts (Western Electric run rules). Each runs in the browser. |
| Explore Nodes | Single-node sandboxes — drag a slider, watch the node respond in real time. Covers the Kalman 1D filter and the kernel convolution node, with more to come. |
| Under the Hood | How messages flow and get enriched node by node, how bad data and throwing functions are handled without crashing the pipeline, how per-asset isolation works, and timestamp requirements. |
| Flow Language | The complete DSL reference — flow anatomy, node call signatures, dynamic options (tunables), single vs. multi-field processing, naming policies, and node processing types. |
| Composition Patterns | Proven node combinations for recurring problems: noise-tolerant alarms, drift detection, adaptive diagnostics, layered flows, and downsampling for storage. Includes clear guidance on when to use passIf, emitIf, or controller/disable. |
| Semantics | How to define what computed values mean — types, units, physical ranges, operational limits — as a single source of truth shared by storage, dashboards, and query engines. Covers the facts-vs-decisions design principle. |
| Node Index | Every node grouped by category — Signal Conditioning, Detection, Feature Extraction, Intelligence, and more — with what each computes and what it adds to the message. |
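The "Hello Flow!" pattern above — smooth, detect, confirm, broadcast — can be sketched as four tiny functions. All names and signatures here are hypothetical stand-ins, not winkComposer's real nodes; the point is the shape of the pipeline:

```javascript
// Sketch of the 4-node temperature monitor (illustrative, not the real API).
const esMean = (alpha) => {
  let mean = null;
  return (msg) => {
    mean = mean === null ? msg.temp : alpha * msg.temp + (1 - alpha) * mean;
    return { ...msg, mean };
  };
};

const threshold = (limit) => (msg) => ({ ...msg, hot: msg.mean > limit });

// Confirm: require k consecutive detections before alarming,
// so a single noisy spike never fires.
const confirm = (k) => {
  let run = 0;
  return (msg) => {
    run = msg.hot ? run + 1 : 0;
    return { ...msg, alarm: run >= k };
  };
};

const broadcast = (sink) => (msg) => {
  if (msg.alarm) sink(msg);
  return msg;
};

const runFlow = (...nodes) => (msg) => nodes.reduce((m, n) => n(m), msg);

const alarms = [];
const monitor = runFlow(
  esMean(0.5), threshold(70), confirm(3),
  broadcast((m) => alarms.push(m))
);

[72, 75, 78, 80].forEach((temp, t) => monitor({ t, temp }));
console.log(alarms.length); // alarms fire only after 3 consecutive hot readings
```

The `confirm` stage is what makes the alarm noise-tolerant: detection and confirmation are separate responsibilities, so each stays trivially simple.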
A pure compute benchmark — every step a live message takes, from arrival through all 8 nodes to final computed output, with no I/O. Asset pipelines are created dynamically as each new asset is first encountered; the benchmark runs 10 such pipelines concurrently, interleaved in random order to reflect real multi-asset deployment. Measured with process.hrtime.bigint() across 4.5 million messages (10 pipelines × 900 data points × 500 iterations). Storage and MQTT I/O are excluded.
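The measurement pattern itself is simple to reproduce: time a pure compute loop with `process.hrtime.bigint()` (a real Node.js API) and divide messages processed by elapsed seconds. The pipeline below is a trivial stand-in, not winkComposer's 8-node flow, so the numbers it produces are not comparable to the table that follows:

```javascript
// Throughput measurement pattern: pure compute, no I/O.
const pipeline = (msg) => ({ ...msg, out: msg.value * 2 }); // stand-in node

const N = 1_000_000;
const start = process.hrtime.bigint();       // nanosecond monotonic clock
for (let i = 0; i < N; i++) pipeline({ value: i });
const elapsedNs = process.hrtime.bigint() - start;

const perSecond = (N * 1e9) / Number(elapsedNs);
console.log(`${Math.round(perSecond).toLocaleString()} messages/second`);
```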
| Configuration | Throughput |
|---|---|
| Raspberry Pi 5 | ~100K messages/second |
| Modern server | ~1.2M messages/second |
| Tracking 200K assets | ~300K messages/second |
The same pipeline runs in your browser — browser results are typically 30–60% of native Node.js throughput due to JIT differences.
Each asset runs in its own isolated state — a fault in one never affects another. Messages queue locally when the network drops and drain cleanly on reconnect. The same pipeline code runs unchanged from edge to cloud. Shutdown is ordered and deterministic: sources close first, storage last, with no data corruption.
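Per-asset isolation and dynamic pipeline creation can be sketched as a lazily populated map from asset id to a private pipeline instance. This is an illustration of the principle, not winkComposer's internals — each asset's state lives in its own closure, so a fault in one can never leak into another:

```javascript
// Sketch: one isolated pipeline per asset, created on first encounter.
const makePipeline = () => {
  let count = 0;                          // private per-asset state
  return (msg) => ({ ...msg, seq: ++count });
};

const pipelines = new Map();
const route = (msg) => {
  if (!pipelines.has(msg.asset)) pipelines.set(msg.asset, makePipeline());
  return pipelines.get(msg.asset)(msg);
};

const a = route({ asset: 'pump-1', value: 10 }); // pump-1's first message
const b = route({ asset: 'pump-2', value: 12 }); // pump-2 gets its own state
const c = route({ asset: 'pump-1', value: 11 }); // pump-1 continues at seq 2
```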
| Star winkComposer | Follow @winkjs | Discussions |
|---|---|---|
| Support open-source streaming intelligence. | Stay updated on releases and ecosystem developments. | Questions, ideas, or feedback — all welcome. |
winkJS is the open-source home of two high-performance, production-grade tools — built from first principles, tested to near-perfection, and trusted by thousands of projects worldwide.