@remotion/studio: Use streaming audio waveform in timeline #6931

JonnyBurger merged 4 commits into main

Conversation
Replace the old waveform implementation (which decoded entire audio files into memory via the Web Audio API) with a streaming approach using mediabunny. Peaks are computed on-the-fly at 100 samples/sec and rendered to a canvas instead of as individual DOM elements. The waveform is centered vertically. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
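The on-the-fly peak computation can be sketched roughly as follows. This is a minimal, library-agnostic illustration, not the actual @remotion/studio code: `computePeaks` and the mono-chunk assumption are hypothetical, and the real implementation streams chunks from mediabunny rather than receiving a single array.

```typescript
// Downsample a chunk of decoded audio samples to peak values at a fixed
// target rate, keeping only the max absolute amplitude per output bucket.
// Because this works per chunk, the entire file never sits in memory.
const TARGET_SAMPLE_RATE = 100; // peaks per second of audio

const computePeaks = (
	samples: Float32Array, // one decoded chunk, assumed mono
	sourceSampleRate: number,
): Float32Array => {
	const samplesPerPeak = sourceSampleRate / TARGET_SAMPLE_RATE;
	const peakCount = Math.ceil(samples.length / samplesPerPeak);
	const peaks = new Float32Array(peakCount);
	for (let i = 0; i < samples.length; i++) {
		const bucket = Math.floor(i / samplesPerPeak);
		const abs = Math.abs(samples[i]);
		if (abs > peaks[bucket]) {
			peaks[bucket] = abs;
		}
	}
	return peaks;
};
```

Max-of-absolute-values per bucket is the usual choice for waveform thumbnails, since it preserves transient spikes that averaging would smear out.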
Reviewed PR #6931. Submitted feedback with 3 inline comments. Task list (7/7 completed)
Clean rewrite — streaming peaks via mediabunny + canvas putImageData is a solid improvement over decoding entire files into memory. Two actionable items below, one dead-code cleanup and one minor redundancy.
Big Pickle (free)
```ts
const TARGET_SAMPLE_RATE = 100;

const peaksCache = new Map<string, Float32Array>();
```
Nit: this module-level Map grows without bound for the lifetime of the Studio session. Not critical for a dev tool, but during long sessions with many audio files, entries could accumulate. A simple LRU or size cap would help, though this is fine to defer.
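A minimal size-capped cache along the lines suggested could look like this. It relies on `Map` preserving insertion order, so the first key is always the least recently used; the names, the cap of 50, and the accessor functions are illustrative, not part of the PR.

```typescript
// Size-capped cache using Map insertion order as LRU order:
// on a hit, re-insert the key to mark it most recently used;
// on inserting past the cap, evict the oldest (first) key.
const MAX_CACHED_WAVEFORMS = 50;
const peaksCache = new Map<string, Float32Array>();

const getCachedPeaks = (src: string): Float32Array | undefined => {
	const hit = peaksCache.get(src);
	if (hit !== undefined) {
		// Re-insert so this key becomes the most recently used
		peaksCache.delete(src);
		peaksCache.set(src, hit);
	}
	return hit;
};

const setCachedPeaks = (src: string, peaks: Float32Array): void => {
	peaksCache.delete(src);
	peaksCache.set(src, peaks);
	if (peaksCache.size > MAX_CACHED_WAVEFORMS) {
		// Map iterates in insertion order, so the first key is the oldest
		const oldest = peaksCache.keys().next().value as string;
		peaksCache.delete(oldest);
	}
};
```

This keeps the hot path a single `Map` lookup while bounding memory, which is usually enough for a dev-tool cache.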
Acknowledged — deferring for now since it's a dev-only tool.
Addressed review feedback: deleted the unused code flagged in review. Task list (3/3 completed)
Summary

- Replaces getAudioData() (Web Audio API, which decodes the entire file into memory) with mediabunny streaming that computes peaks on-the-fly at 100 samples/sec
- Replaces individual `<div>` bars with canvas-based pixel rendering via putImageData

Test plan

- Trimmed audio (startFrom)

🤖 Generated with Claude Code
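The canvas-based rendering mentioned in the summary can be sketched by filling an RGBA buffer, one column per peak, centered vertically, and handing it to `ctx.putImageData(new ImageData(pixels, width, height), 0, 0)`. The function name, the fixed white color, and the one-peak-per-column mapping are illustrative assumptions, not the actual Studio implementation.

```typescript
// Build the RGBA pixel buffer that putImageData expects.
// Each x column represents one peak; bar height is proportional to the
// peak amplitude (0..1) and the bar is centered vertically.
const renderPeaksToPixels = (
	peaks: Float32Array,
	width: number,
	height: number,
): Uint8ClampedArray => {
	// 4 bytes per pixel (RGBA); zero-initialized, i.e. fully transparent
	const pixels = new Uint8ClampedArray(width * height * 4);
	for (let x = 0; x < Math.min(width, peaks.length); x++) {
		const barHeight = Math.max(1, Math.round(peaks[x] * height));
		const top = Math.floor((height - barHeight) / 2); // center vertically
		for (let y = top; y < top + barHeight; y++) {
			const i = (y * width + x) * 4;
			pixels[i] = 255; // R
			pixels[i + 1] = 255; // G
			pixels[i + 2] = 255; // B
			pixels[i + 3] = 255; // A (opaque)
		}
	}
	return pixels;
};
```

Writing pixels in one pass and blitting them with putImageData avoids per-bar DOM nodes entirely, which is where the performance win over `<div>` bars comes from.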