Inspiration

When my grandmother passed away last year, I realized how little I knew about her. What her dreams were when she was my age, just sixteen. What she went through during the Cultural Revolution, unable to continue high school. Despite the piles of notes I had taken about the Cultural Revolution in my AP World class, it was all from the American lens. What I wouldn't give for her perspective!

Every day, voices disappear. Elders pass, languages fade, stories get lost. That loss made us realize something important: we talk a lot about sustainability, about emissions and ecosystems, but rarely about cultural sustainability, about what happens when the knowledge passed down through generations, in stories, voices, and gestures, simply fades away.

That's why we built Echo. Echo is a platform that bridges the gap between languages, communities, and generations. It lets people share personal oral histories and makes them accessible to the world by transcribing and translating them. In doing so, we preserve cultural memory and promote cultural sustainability, ensuring that important knowledge, identity, and traditions are not lost but carried forward for future generations. We believe sustainability isn't just about saving forests or water; it's about saving wisdom too. With Echo, we are not only preserving and translating old oral histories, we are building a living archive that grows stronger over time, a lasting resource for generations to come. That is the epitome of sustainability.

What it does

Echo is a storytelling platform designed to preserve oral histories in an interactive, meaningful way. Users start by entering key details, such as the speaker's name, age, photo, and their relationship to the person uploading the story, then upload an audio recording or record directly through the platform. The website generates a transcript in the speaker's native language, and users can either select AI-generated tags or create their own to categorize the memory. They can also add the date, location, and a summary to further contextualize the story.

Once submitted, each story is pinned to an interactive global map and added as a node in the "Explore the World" section, which links related stories by shared themes. The platform features a keyword-based search to help users discover stories by topic, as well as a map-based search. Users can translate search results into their language of choice, removing the language barrier entirely. Echo also provides AI-generated insights, like identifying the most popular theme of the week or uncovering common themes across regions.

How we built it

The website utilizes a powerful set of tools and frameworks to implement its key features, centered around audio processing and interactive storytelling. One of the primary tools is Whisper.js, a JavaScript library that allows for browser-based transcription of audio files. It works alongside a more sophisticated Python-based Whisper service hosted on the server, which enhances transcription accuracy. This dual approach ensures that audio uploaded by users is effectively converted to text with timestamps, providing a seamless experience.

For multilingual support, the application employs ArgosTranslate, an offline translation engine. This lets users browse stories across many languages, ensuring accessibility for diverse demographics. Caching translated content further improves performance, minimizing load times when users switch between languages.

On the frontend, the website leverages React, using React Query for efficient data fetching and state management. Stories are fetched based on user-selected themes and years, showcasing the dynamic interaction capabilities of the application. The visual representation of stories is handled by the Leaflet library, which lets users interactively explore geographic and thematic relationships between narratives. This combination of libraries and technologies forms the backbone of a user-friendly interface for storytelling and cultural exploration.

Challenges we ran into

A major challenge we solved was enabling accurate transcription in any language using that language's own alphabet. Getting this to work smoothly and accurately every time required us to carefully build and fine-tune a pipeline combining Whisper and NLP models. We tested it across multiple languages, including Ukrainian, French, Spanish, and English. Eventually, we refined our code and were excited to see it consistently transcribe each audio clip correctly.
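One cheap sanity check for "right alphabet" output is to verify that a transcript's dominant Unicode script matches the expected one. This snippet is an illustration of that idea, not our exact pipeline:

```python
import unicodedata

def dominant_script(text: str) -> str:
    """Return the most common Unicode script among letters (rough heuristic).

    Unicode character names begin with the script, e.g.
    "CYRILLIC SMALL LETTER A", so the first word of the name works as a key.
    """
    counts: dict[str, int] = {}
    for ch in text:
        if ch.isalpha():
            script = unicodedata.name(ch, "UNKNOWN").split()[0]
            counts[script] = counts.get(script, 0) + 1
    return max(counts, key=counts.get) if counts else "NONE"

# Ukrainian output should come back in Cyrillic, French in Latin:
assert dominant_script("Привіт, світе") == "CYRILLIC"
assert dominant_script("Bonjour le monde") == "LATIN"
```

A check like this can flag runs where the model transliterated into Latin characters instead of transcribing in the speaker's native script.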

Accomplishments that we're proud of

We’re especially proud of our interactive world map, which pins each voice to its place of origin. This not only brings the stories to life visually, but also shows Echo’s global reach and impact. It reminds users that these are real voices from real places, preserving knowledge that would otherwise be lost forever.

What we learned

We learned how to:

  1. Use Whisper locally to convert speech to text
  2. Integrate the Leaflet API to visualize geographic data
  3. Employ ArgosTranslate to allow everyone to read the same information, no matter the language
  4. Build a user-friendly interface to make all of this accessible to anyone

What's next for Echo

Our next goal is to automate the integration of existing archives and old audio clips, making them accessible through Echo’s platform. Eventually, we hope Echo will become a global cultural preservation tool that protects the past while inspiring future generations to share, learn from, and sustain their own stories.
