Inspiration

  • The grandfather of one of our team members had a stroke, and he had a hard time communicating with his loved ones and caretakers. Without the proper resources, the ability to communicate is hindered.
  • We wanted to give everyone who has been through such a traumatic incident a chance to have a voice and be able to COMMUNICATE with people and express their emotions.

What it does

  • The app offers a visual, tap-based communication board, natural text-to-speech output, interactive speech therapy modules, and AI phrase prediction. It enables users to express their needs, practice speech, and rebuild confidence through our simple and accessible interface.
  • With features like real-time speech feedback, personalized phrase suggestions, cognitive-language therapy, and caregiver/therapist support, NeuroSpeak empowers users to reconnect with their loved ones, participate in conversations, and take control of their recovery.

How we built it

  • We used React to build the frontend. The backend runs on FastAPI with Uvicorn in Python, calls Google Cloud APIs, and uses Firestore as the database. To connect the frontend to the backend we used Axios, an HTTP client library that sends requests from the React app to the FastAPI endpoints.

Challenges we ran into

  • The first challenge we ran into was connecting the frontend to the backend.
  • We initially wanted to add a "sign in with Google" option, but it was difficult to incorporate without changing the original sign-in and sign-up flows, so we decided to come back to that feature if we had time after completing everything else.
  • We accidentally committed a file that should have been git-ignored, which made a private credential public. We had to delete the compromised service account, create a new one, and generate a new API key to call the Google Cloud platforms.

Accomplishments that we're proud of

  • We are really proud that we were able to complete 90% of the project in the first 12 hours of the hackathon.

What we learned

  • We learned how to create a full-stack AI-assisted application, combining a React frontend with a FastAPI backend, and integrating with Google Cloud Firestore for real-time data storage.
  • We implemented OAuth 2.0 authentication, enabling secure login via email while managing user sessions and storing user profiles in Firestore.
  • We explored AI-based phrase prediction and text-to-speech (TTS) systems to enhance accessibility and learn how to handle speech generation and correction feedback.
  • We developed a feedback loop, allowing users to confirm or reject speech outputs, with reprocessing logic in the backend to improve accuracy over time.
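A stripped-down sketch of that feedback loop (the class and method names are hypothetical; the real backend also persists the feedback to Firestore):

```python
# Hypothetical sketch: rejected outputs lower a phrase's score, so
# reprocessing prefers candidates the user has confirmed before.
class FeedbackStore:
    def __init__(self):
        self.scores = {}  # phrase -> cumulative confirm/reject score

    def record(self, phrase: str, accepted: bool) -> None:
        # +1 when the user confirms the output, -1 when they reject it.
        self.scores[phrase] = self.scores.get(phrase, 0) + (1 if accepted else -1)

    def best(self, candidates: list[str]) -> str:
        # Pick the candidate with the highest accumulated score.
        return max(candidates, key=lambda p: self.scores.get(p, 0))

store = FeedbackStore()
store.record("I want water", True)    # user confirmed this output
store.record("I want walker", False)  # user rejected this one
print(store.best(["I want water", "I want walker"]))  # -> I want water
```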
  • We learned to debug and resolve issues like missing modules, misconfigured paths, or invalid Firestore writes — gaining deeper experience with dependency management and cloud services.

What's next for NeuroSpeak

In the future, we would like to add features to our app such as:

  • Multilingual support
  • Speech therapist dashboard: The caretaker and/or speech therapist can log their findings on the app dashboard.

Built With

  • cloudfirestoreapi
  • cloudfunctionsapi
  • cloudloggingapi
  • cloudmonitoringapi
  • cloudstorageapi
  • cloudvisionapi
  • css
  • fastapi
  • firestoreapi
  • geminiapi
  • googlecloudplatform
  • html
  • iam-service-account-credentials-api
  • javascript
  • python
  • react-native
  • tailwind
  • vertexai-api
  • vite