Hey, this one was neat but the whole 'voice control' thing didn't really pan out. Go see the version at https://github.com/xlabCU/historicalfriction instead.
History is all around us. The voices of the past thicken the air, calling out for your attention. When it all gets too much, pull the ear-buds out, stop, and look at where you are with fresh eyes, in the new silence...
Historical Friction is a web app that makes physical space thick by auralizing the digital data streams it finds. As you walk, the app discovers nearby Wikipedia articles via GeoNames and renders them as sound — overlapping voices, ambient drones, generative music, or ghostly whispers.
You can also talk back to the past using on-device voice recognition (powered by Moonshine JS), asking questions like "what's here?" or "tell me about the river" to filter and explore the historical layers around you (still glitchy).
| Mode | Description |
|---|---|
| Voices | The classic: each article is spoken aloud at a random pitch via the Web Speech API. Multiple articles overlap, creating a cacophony of the past arguing with itself. |
| Drone | Articles become ambient tones via the Web Audio API. Distance maps to frequency, summary length to volume, and compass bearing to stereo panning. Dense historical areas hum; empty spaces go quiet. |
| Music | Articles map to notes on a pentatonic scale. Distance sets the pitch, word count sets the rhythm. Walking composes a unique generative piece. |
| Whisper | TTS at very low volume and slow rate — ghostly, half-intelligible murmurs. You catch fragments. The past as half-heard rumor. |
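The Music mode's mapping can be sketched as a pair of pure functions. This is an illustrative sketch, not the app's actual tuning: the pentatonic scale, the two-octave range rooted at A3, and the rhythm clamps are all assumptions.

```javascript
// Illustrative sketch of the Music mode mapping (assumed tuning values):
// distance picks a note on a major pentatonic scale, word count sets the rhythm.
const PENTATONIC = [0, 2, 4, 7, 9]; // semitone offsets within an octave

function midiToHz(midi) {
  return 440 * 2 ** ((midi - 69) / 12);
}

// Map distance (0..maxDist metres) onto two octaves of the scale,
// rooted at A3 (220Hz): closer articles get lower notes.
function noteForDistance(distanceM, maxDist = 1000, rootMidi = 57) {
  const t = Math.min(distanceM, maxDist) / maxDist;          // normalise to 0..1
  const steps = Math.round(t * (PENTATONIC.length * 2 - 1)); // 0..9 scale steps
  const octave = Math.floor(steps / PENTATONIC.length);
  const degree = steps % PENTATONIC.length;
  return midiToHz(rootMidi + octave * 12 + PENTATONIC[degree]);
}

// Longer articles repeat their note faster, clamped to 250ms..2000ms.
function beatIntervalMs(wordCount) {
  return Math.max(250, Math.min(2000, 60000 / Math.max(30, wordCount)));
}
```

Each tick of `beatIntervalMs`, the engine would retrigger that article's oscillator at `noteForDistance(...)` Hz, so walking past a cluster of articles composes the piece.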
Tap the microphone button to enable voice interaction. Speech recognition runs entirely in your browser via Moonshine JS (no cloud, no data leaves your device).
| Say... | Effect |
|---|---|
| "What's here?" | Hear the closest article clearly |
| "Tell me about [topic]" | Filter articles matching your keyword |
| "Clear filter" / "Show all" | Remove the keyword filter |
| "Silence" / "Stop" | Pause all audio |
| "Play" / "Resume" | Start audio again |
| "Voices" / "Drone" / "Music" / "Whisper" | Switch sonification mode |
| "Louder" / "Softer" | Adjust volume |
| "More" / "Closer" | Expand or contract the search radius |
| "What am I hearing?" | Name the currently sounding articles |
Serve the directory with any static file server:

```sh
npx serve .
```

Then open http://localhost:3000. You'll need to allow location access and (for voice commands) microphone access.
- Geolocation: `navigator.geolocation.watchPosition()` tracks your movement
- Data: GeoNames API finds nearby Wikipedia articles
- TTS: native `speechSynthesis` (Web Speech API) for the Voices and Whisper modes
- Sonification: Web Audio API oscillators, filters, panners, and reverb (Drone and Music modes)
- Voice recognition: Moonshine JS, on-device ASR via ONNX Runtime Web
- Image detection: Wikipedia API checks which articles need photos (highlighted in the UI)
When a user lands on Historical Friction, selects "Drone", and hits "Play", the app initiates a complex chain of data retrieval and digital signal processing (DSP).
Here is the step-by-step lifecycle of that interaction:
As soon as the user interacts, the browser’s Geolocation API is triggered.
- Data Pulled: The phone’s GPS coordinates (Latitude/Longitude).
- Transformation: These coordinates are formatted into an API request sent to Wikipedia's Geosearch API.
- Result: Wikipedia returns a JSON list of the nearest historical articles (e.g., "The Old Town Hall," "Site of 1842 Riot"). Each item includes its own Lat/Lon and a snippet of text.
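That request can be sketched as a URL builder against the MediaWiki geosearch endpoint. The radius and limit values here are illustrative defaults, not necessarily what the app sends.

```javascript
// Build the MediaWiki geosearch request URL. Radius and limit are
// illustrative defaults; `origin=*` enables CORS from the browser.
function geosearchUrl(lat, lon, radiusM = 1000, limit = 20) {
  const params = new URLSearchParams({
    action: "query",
    list: "geosearch",
    gscoord: `${lat}|${lon}`, // "lat|lon", pipe-separated
    gsradius: String(radiusM),
    gslimit: String(limit),
    format: "json",
    origin: "*",
  });
  return `https://en.wikipedia.org/w/api.php?${params}`;
}
```

The JSON response's `query.geosearch` array carries each article's title, coordinates, and distance from the query point.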
Browsers block audio from playing automatically. The user’s "Play" tap is the "User Gesture" required to unlock the Web Audio API.
- Transformation: The `AudioContext` is instantiated.
- The Master Chain: The app builds a virtual "mixing desk" in the phone's memory:
- Dynamics Compressor: To prevent the "cacophony" from distorting or blowing out the phone's tiny speakers.
- Master Gain: The volume fader.
- Convolver (Reverb): The app generates a 3-second "Impulse Response" (a mathematical model of a hall) using random white noise that decays exponentially. This gives the sounds a "ghostly" physical space.
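The impulse response described above is just a buffer of samples (white noise shaped by an exponential decay), so the math can be shown as a pure function. The decay constant is an assumed tuning value, and copying the samples into an `AudioBuffer` for the `ConvolverNode` is omitted so the sketch stands alone.

```javascript
// Generate a mono impulse response for the reverb: white noise shaped by
// an exponential decay envelope. decay=4 is an assumed tuning value.
function makeImpulseResponse(sampleRate = 44100, seconds = 3, decay = 4) {
  const length = Math.floor(sampleRate * seconds);
  const ir = new Float32Array(length);
  for (let i = 0; i < length; i++) {
    const t = i / length;                       // position in the tail, 0..1
    const envelope = Math.exp(-decay * t);      // exponential decay
    ir[i] = (Math.random() * 2 - 1) * envelope; // white noise * envelope
  }
  return ir;
}
```

In the browser, this array would be written into both channels of an `AudioBuffer` and assigned to `convolver.buffer`.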
While the local articles are loading, the app opens a Server-Sent Events (SSE) connection to stream.wikimedia.org.
- Data Pulled: A real-time "heartbeat" of every edit happening on Wikipedia globally (thousands per minute).
- Transformation: The app ignores the text of the edits and looks only at the "Delta" (how many characters were added or removed). This is mapped to a variable called `_globalFriction`.
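Folding one edit event into `_globalFriction` might look like the sketch below. The `length.old`/`length.new` fields follow the Wikimedia EventStreams recentchange schema; the 5,000-character normalisation constant is an assumed tuning value.

```javascript
// Running friction level, bumped by each recentchange event and clamped
// to 0..1. The 5000-character normalisation constant is an assumption.
let _globalFriction = 0;

function onEditEvent(event) {
  const oldLen = event.length?.old ?? 0;
  const newLen = event.length?.new ?? 0;
  const delta = Math.abs(newLen - oldLen); // characters added or removed
  _globalFriction = Math.min(1, _globalFriction + delta / 5000);
  return _globalFriction;
}
```

An `EventSource` subscribed to the recentchange stream would call `onEditEvent(JSON.parse(msg.data))` on every message.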
For every Wikipedia article found nearby, a unique "Voice" is created in the `sonification.js` engine.
- Panning (Left to Right): The app compares the user's GPS heading to the article's GPS location.
  - Math: If the article is at a 90° angle to the user, the `StereoPannerNode` shifts that voice entirely to the right ear.
- Pitch (Distance):
  - Logic: Articles 10 meters away are mapped to a low, heavy 60Hz thrum. Articles 500 meters away are mapped to a thinner 220Hz hum. This creates a "gravity" effect: the closer you are to a site, the "heavier" the air feels.
- Timbre (Waveform):
  - Transformation: The app takes the Article Title (e.g., "Market Square") and generates a Hash (a number derived from the text).
  - Result: This number picks the waveform: "Market Square" might be a `sine` wave (smooth), while "Bloody Sunday" might be a `sawtooth` wave (harsh/buzzing).
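The three per-voice mappings can be sketched as pure functions. The 10m..500m and 60Hz..220Hz ranges come from the description above; the sine pan curve and the string hash are illustrative choices, not the app's exact formulas.

```javascript
// Relative bearing (degrees) to stereo pan: 90° to the right pans fully
// right, straight ahead stays centred. Illustrative pan curve.
function panForBearing(relativeBearingDeg) {
  return Math.sin((relativeBearingDeg * Math.PI) / 180);
}

// Distance to oscillator frequency: 10m -> 60Hz, 500m -> 220Hz, linear.
function freqForDistance(distanceM) {
  const clamped = Math.min(Math.max(distanceM, 10), 500);
  return 60 + ((clamped - 10) / 490) * (220 - 60);
}

// Hash the title to pick a deterministic waveform per article.
const WAVEFORMS = ["sine", "triangle", "square", "sawtooth"];
function waveformForTitle(title) {
  let hash = 0;
  for (const ch of title) hash = (hash * 31 + ch.codePointAt(0)) >>> 0;
  return WAVEFORMS[hash % WAVEFORMS.length];
}
```

These values feed `StereoPannerNode.pan`, `OscillatorNode.frequency`, and `OscillatorNode.type` respectively.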
Now the "Drone" is running, but it isn't static. It is being modulated by the global edit stream in real-time.
- The Modulation Loop: 60 times per second (via `requestAnimationFrame`), the app checks the `_globalFriction` level.
- The Transformation:
  - If someone in Japan just deleted 5,000 words from a Wikipedia page, `_globalFriction` spikes.
  - The Low-Pass Filters on all your drones "open up."
  - Result: The user hears the background drone suddenly become bright, fizzy, and intense for a second, before "cooling down" (decaying) back into a muffled thrum.
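One frame of that loop, as a pure function: map friction to a low-pass cutoff, then decay the friction toward zero. The cutoff range and decay factor are assumed tuning values.

```javascript
// One frame of the modulation loop: friction opens the low-pass filter,
// then cools down exponentially. Cutoff range and decay are assumptions.
function modulateFrame(friction, minHz = 200, maxHz = 8000, decay = 0.95) {
  const cutoffHz = minHz + friction * (maxHz - minHz); // bright when friction spikes
  return { cutoffHz, nextFriction: friction * decay }; // decay back to a muffled thrum
}
```

Each animation frame would write `cutoffHz` to every drone's `BiquadFilterNode.frequency` and carry `nextFriction` into the next frame.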
The user stands in a park. They hear a heavy, low-frequency buzz in their left ear (a nearby monument) and a hollow, smooth whistle in their right ear (a distant church). As they stand there, the entire soundscape "breathes" and shimmers in brightness—an auditory ghost of the global friction of people writing history in real-time.
Built on ici by Ed Summers. The original idea for "historical friction" is detailed at Electric Archaeology.
Shawn Graham & Stuart Eve created the original speak.js-powered version. This modernized version replaces the 2013-era stack (jQuery, Bootstrap 2, CoffeeScript, Emscripten-compiled eSpeak) with vanilla ES6+, Web Audio API, Web Speech API, and Moonshine JS.
To the extent possible under law, the authors have waived all copyright and related rights to this work.
