- One of the initial attempts at using AI to translate and display cultural context
- Translation service with various supported languages
- Designing a nice UI
- A look into the About page promoting our services
- Tourists may also find the nearest restaurants, hotels, and restrooms! And yes, we have light mode!
- History of the user's pictures
Inspiration
Are you tired of accidentally eating repugnant food while visiting a foreign country, or of finding bugs in your cheese? Are you constantly annoyed at having to decipher cryptic writing when you travel the world? What about road signs that are too confusing to read?
Don't worry, we've got you! Maigoro is a handy tool designed to make sure you know exactly what you are looking at: just take a picture, and we'll provide you with descriptions, translations into a language of your choice, and some cultural context for whatever you may bite into!
What it does
Maigoro ( /ˈmaɪ-goʊ-roʊ/, also MY-go-roh ) is a personal guru that spiritually guides you, connecting your mind with wisdom and supporting your inner and outer journey with deeper insight into the world. In a moment of doubt, Maigoro helps you reach a deeper understanding of your environment: snap a photo and it instantly translates what it sees into multiple languages, and it explains the cultural significance of artifacts, symbols, or phrases through detailed insights.
How we built it
Maigoro was built using a Python/Flask backend and a SvelteKit frontend. The backend handles API requests, processes image data, and integrates machine learning models for analyzing landmarks and providing cultural insights. On the frontend, SvelteKit powers a responsive and dynamic interface, offering fast interactions and a seamless user experience.
Challenges we ran into
"ALL. Everything. we ran into everything, bro ... yawwwwwwn... im passing out... sigh.."
The main challenges revolved around getting the OpenAI calls to return legitimate responses and integrating the frontend with the backend. We also had quite a few headaches with some of the frontend components, especially the finer styling, which was relatively annoying to work out.
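One common way to tame unreliable model responses is a retry-and-validate loop: parse the reply as JSON and retry until it contains the expected fields. This is a minimal sketch of that pattern, not our exact production code; `call_model` stands in for whatever wraps the OpenAI API call, and the required keys are assumptions.

```python
import json

# Fields we expect the model to return (illustrative, not the exact schema).
REQUIRED_KEYS = {"description", "translation", "cultural_context"}

def get_structured_insight(call_model, prompt, max_retries=3):
    """Call the model and retry until it returns valid JSON containing
    the expected keys. `call_model` is any callable taking a prompt and
    returning raw text (e.g. a wrapper around a chat-completion call)."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            data = json.loads(raw)
        except json.JSONDecodeError:
            continue  # model returned prose instead of JSON; try again
        if REQUIRED_KEYS <= data.keys():
            return data  # all required fields present
    raise ValueError("model never returned a valid structured response")
```

The same loop also guards against replies that are valid JSON but missing fields, which was a frequent failure mode for us.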
Accomplishments that we're proud of
Being able to recognize text and translate it was a big accomplishment on the back end. Filtering the detected words was crucial to giving the user cultural context that was relevant and informative.
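The filtering step might look something like the helper below, which prunes OCR tokens before they are sent off for cultural context. The heuristics (minimum length, letters required, de-duplication) are illustrative assumptions, not the exact production rules.

```python
import re

def filter_detected_words(words, min_length=3):
    """Keep only OCR tokens worth sending for cultural context:
    drop very short fragments, tokens with no letters (prices, dates,
    stray punctuation), and duplicates, preserving first-seen order."""
    kept = []
    seen = set()
    for word in words:
        token = word.strip()
        if len(token) < min_length:
            continue  # too short to be meaningful
        if not re.search(r"[^\W\d_]", token):
            continue  # no letters at all (e.g. "¥500", "!!")
        key = token.lower()
        if key in seen:
            continue  # already kept a copy of this word
        seen.add(key)
        kept.append(token)
    return kept
```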
What we learned
Delivering a seamless user experience is harder than it looks. A fast and reliable service makes the back end look deceptively simple, and striving for that ideal is difficult given the number of tools the project depends on.
To Infinity and Beyond
This project could be expanded with hardware for people with disabilities, such as wearable devices that scan and interpret the user's surroundings and communicate them back through auditory or tactile feedback, enabling a more connected and accessible experience.
Built With
- camera
- css
- flask
- google-cloud
- google-vision
- html
- javascript
- paint
- pip
- python
- svelte
- visual-studio-code