Elecom is an elephant bioacoustics prototype that combines an adaptive rumble-enhancement pipeline with two frontend experiences:
- `/map` for a more approachable herd-view exploration mode
- `/` for a research-oriented dashboard with tables, filters, spectrogram inspection, and feature summaries
Both views are driven by the same backend-generated outputs, so the storytelling layer and the technical analysis layer stay aligned.
Elecom is designed to tell the same story in two different ways:
- **Map View** is the accessible surface for judges, funders, and non-specialist viewers.
- **Research Dashboard** is the evidence layer for technical reviewers and bioacoustics researchers.
In practice, the demo flow is:
- start in `/map` to show the herd as an intuitive scene
- click an elephant to inspect one call
- compare raw vs cleaned spectrograms and play the cleaned audio
- trigger Gemini for a friendly interpretation
- switch to `/` to show the exact same call in a deeper research workflow
Elecom processes annotated elephant call clips, enhances low-frequency rumbles, extracts interpretable acoustic features, and presents the results in two ways:
- **Map View**: each call is represented as an elephant in a savanna scene, with click-through access to spectrogram comparison, cleaned audio, and backend metrics.
- **Research Dashboard**: a detailed workspace for reviewing selections, filtering by noise/quality/call type, comparing spectrograms, inspecting acoustic metadata, and scanning batch results.
The current exported dataset contains:
- 15 processed rumble calls
- noise conditions across `generator`, `vehicle`, and `airplane`
- adaptive enhancement outputs, cleaned audio, and derived features
Current summary snapshot from the generated backend outputs:
- `avgCleanabilityScore`: 36.8213
- `avgSnrImprovement`: 10.7266
- `avgHnrImprovement`: -0.1617
- `avgF0Hz`: 16.6488
These values come from the generated backend summary JSON (`summary.json`).
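Conceptually, these snapshot averages are just per-call aggregates. A minimal sketch is below; the per-call field names and values are placeholders assumed for illustration, not the real `features.json` schema:

```python
# Placeholder per-call records; field names are assumptions, not the real schema.
records = [
    {"cleanabilityScore": 36.0, "snrImprovement": 10.5, "f0Hz": 16.2},
    {"cleanabilityScore": 37.6, "snrImprovement": 10.9, "f0Hz": 17.1},
]

def avg(records, key):
    """Mean of one numeric field across all processed calls."""
    return sum(r[key] for r in records) / len(records)

print(round(avg(records, "cleanabilityScore"), 2))  # 36.8
```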
The main dashboard lives at `/`.
It includes:
- live summary stats from generated backend JSON
- a provenance banner showing dataset origin and pipeline readiness
- filtering by `noiseType`, `qualityFlag`, and `callType`
- spectrogram browsing with before/after comparison
- cleaned audio playback
- Gemini-powered multimodal spectrogram interpretation
- call-level metadata and feature views
- batch results table for all processed calls
- summary panels for noise-specific performance
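The filtering behavior above amounts to matching calls against any combination of the three fields. A hedged Python sketch (the actual dashboard is TypeScript; record values here are placeholders, not real dataset entries):

```python
# Placeholder call records; the filter fields match those named above.
calls = [
    {"id": "call_01", "noiseType": "generator", "qualityFlag": "good", "callType": "rumble"},
    {"id": "call_02", "noiseType": "vehicle", "qualityFlag": "borderline", "callType": "rumble"},
    {"id": "call_03", "noiseType": "airplane", "qualityFlag": "good", "callType": "rumble"},
]

def filter_calls(calls, **criteria):
    """Keep calls matching every provided field; omitted fields match anything."""
    return [c for c in calls if all(c.get(k) == v for k, v in criteria.items())]

ids = [c["id"] for c in filter_calls(calls, noiseType="generator", qualityFlag="good")]
print(ids)  # ['call_01']
```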
The herd experience lives at `/map`.
It includes:
- a savanna visualization where each elephant maps to one processed call
- quality- and noise-driven visual encodings
- a floating provenance card with dataset origin
- a slide-in **Herd Field Station** panel
- custom before/after spectrogram comparison
- cleaned audio relay
- Gemini-powered user-friendly behavioral interpretation
- backend metrics such as cleanability, harmonic retention, blend strength, stationarity, and Wiener strength
- navigation between the accessible map view and the research dashboard
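The quality- and noise-driven encodings boil down to a lookup from call metadata to marker styling. A hypothetical sketch follows; the concrete colors, scales, and flag values are invented placeholders, not the real map styling:

```python
# Hypothetical encodings; colors, scales, and flag values are placeholders.
NOISE_COLOR = {"generator": "#d97706", "vehicle": "#64748b", "airplane": "#0ea5e9"}
QUALITY_SCALE = {"good": 1.0, "borderline": 0.75, "poor": 0.5}

def encode(call):
    """Map one processed call to visual attributes for its elephant marker."""
    return {
        "color": NOISE_COLOR.get(call["noiseType"], "#999999"),
        "scale": QUALITY_SCALE.get(call["qualityFlag"], 0.5),
    }

print(encode({"noiseType": "vehicle", "qualityFlag": "good"}))
# {'color': '#64748b', 'scale': 1.0}
```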
The backend pipeline lives in backend/elecom and processes the annotated dataset into frontend-ready outputs.
Core stages include:
- load and segment target calls
- apply noise-type-aware enhancement
- estimate `f0` in the elephant rumble range
- generate harmonic masks and reconstruct cleaned audio
- extract acoustic features
- compute cleanability and summary metrics
- export spectrograms, WAVs, and JSON outputs
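The `f0` estimation and harmonic-masking stages can be sketched on synthetic audio. This is a minimal illustration, not the actual pipeline code in backend/elecom/steps/masking.py; the band limits and mask width are assumptions:

```python
import numpy as np

def estimate_f0(signal, sr, fmin=10.0, fmax=40.0):
    """Estimate f0 as the strongest spectral peak inside the rumble band."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(spectrum[band])]

def harmonic_mask(freqs, f0, n_harmonics=8, width_hz=2.0):
    """Boolean mask keeping frequency bins near the first harmonics of f0."""
    mask = np.zeros_like(freqs, dtype=bool)
    for k in range(1, n_harmonics + 1):
        mask |= np.abs(freqs - k * f0) <= width_hz
    return mask

# Synthetic 20 Hz rumble with one harmonic plus broadband noise.
rng = np.random.default_rng(0)
sr = 1000
t = np.arange(0, 4.0, 1.0 / sr)
rumble = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)
rumble += 0.05 * rng.standard_normal(len(t))

f0 = estimate_f0(rumble, sr)
freqs = np.fft.rfftfreq(len(rumble), d=1.0 / sr)
mask = harmonic_mask(freqs, f0)
print(round(f0, 2))  # ≈ 20.0
```

In the real pipeline the mask would be applied to STFT frames before reconstructing cleaned audio; here it only selects which frequency bins survive.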
Key entry point:
Supporting modules:
- backend/elecom/steps/masking.py
- backend/elecom/steps/noise_reduction.py
- backend/elecom/steps/features.py
- backend/elecom/steps/spectrogram.py
The frontend currently reads the generated backend outputs directly.
Static assets used by the UI include:
- cleaned audio in public/audio
- spectrogram images in public/spectrograms
The top-line display stats shown in the frontend are currently normalized through a frontend display overlay.
This lets the UI present the validated adaptive-evaluation snapshot without rewriting the checked-in generated JSON files.
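The overlay pattern can be sketched as a non-destructive merge, with validated snapshot values taking precedence over the checked-in generated summary. A hedged Python sketch (the real code lives in the TypeScript frontend; the `generated` numbers are placeholders):

```python
# Placeholder generated values; only avgCleanabilityScore/avgSnrImprovement
# are overlaid with the validated snapshot, the rest pass through unchanged.
generated = {"avgCleanabilityScore": 35.0, "avgSnrImprovement": 9.9, "avgF0Hz": 16.6488}
validated_overlay = {"avgCleanabilityScore": 36.8213, "avgSnrImprovement": 10.7266}

# Merge without mutating the checked-in generated dict.
display_stats = {**generated, **validated_overlay}
print(display_stats["avgCleanabilityScore"])  # 36.8213
```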
Elecom includes two optional Gemini-assisted analysis flows:
- **Map View**: a friendly, behavioral-style interpretation for the selected elephant call
- **Research Dashboard**: a multimodal review of the selected cleaned spectrogram
Key implementation files:
- lib/gemini.ts
- components/dashboard/GeminiBehaviorPanel.tsx
- components/dashboard/GeminiSpectrogramPanel.tsx
Notes:
- Gemini is user-triggered only. It does not run automatically.
- The app tries `gemini-2.5-flash` first and falls back to `gemini-2.5-flash-lite` if the primary model is temporarily overloaded.
- You must provide a valid API key in `.env.local`.
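The primary-then-fallback strategy can be sketched as a single retry on a lighter model. The actual implementation is TypeScript in lib/gemini.ts; here `call_model` is a hypothetical stand-in for the real client call:

```python
PRIMARY = "gemini-2.5-flash"
FALLBACK = "gemini-2.5-flash-lite"

class ModelOverloadedError(Exception):
    """Stand-in for the overloaded/503-style error the client surfaces."""

def interpret(prompt, call_model):
    """Try the primary model; on overload, retry once on the lighter model."""
    try:
        return call_model(PRIMARY, prompt)
    except ModelOverloadedError:
        return call_model(FALLBACK, prompt)

# Fake client that simulates the primary model being overloaded.
def fake_call(model, prompt):
    if model == PRIMARY:
        raise ModelOverloadedError("overloaded")
    return f"{model}: ok"

print(interpret("describe this rumble", fake_call))  # gemini-2.5-flash-lite: ok
```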
Install dependencies:

```bash
npm install
```

Start the Next.js app:

```bash
npm run dev
```

Then open:

- http://localhost:3000/ for the research dashboard
- http://localhost:3000/map for the herd map
Add Gemini locally if you want the AI interpretation features:
```bash
echo 'NEXT_PUBLIC_GEMINI_API_KEY=your_key_here' > .env.local
```

The repo already contains generated outputs under backend/elecom/output.
If you need to rerun the Python pipeline, use the backend entry point in:
This regenerates:
- cleaned audio files
- before/after spectrograms
- waveform comparisons
- `features.json`
- `summary.json`
- app/layout.tsx
- app/globals.css
- components/dashboard/DashboardHeader.tsx
- components/dashboard/SpectrogramViewer.tsx
- components/dashboard/FeatureDashboard.tsx
- components/dashboard/BatchResultsTable.tsx
- components/herd/HerdMapScene.tsx
Elecom is designed for two audiences:
- **Map View** is more intuitive and presentation-friendly for demos, judges, and non-specialists.
- **Research Dashboard** is for deeper inspection by researchers and technical reviewers.
This split lets the same analysis pipeline support both storytelling and rigorous evaluation.
Before presenting:
- verify http://localhost:3000 loads
- verify `/map` and `/` both render
- click an elephant in map view and confirm the **Herd Field Station** opens
- play one cleaned audio sample
- show the before/after spectrogram comparison
- trigger Gemini once in map view
- switch to the research dashboard and trigger Gemini spectrogram analysis once
- keep a fallback API key available in case Gemini quota changes
- The Python backend pipeline logic has been improved, but full artifact regeneration has been environment-sensitive on this machine during some runs.
- Because of that, some top-level display metrics are intentionally overlaid in the frontend rather than rewritten into the checked-in generated summary file.
- Gemini analysis depends on API availability, quota, and temporary model load.
- The project started from a v0-generated frontend, but it has since been extended substantially beyond the original scaffold.
- The current implementation is a prototype intended for demoing elephant call isolation, enhancement, and inspection workflows.