Inspiration

This was inspired by a general lack of willingness to do math. Despite all of us having taken calculus and linear algebra, splitting the bill is always a bit of a headache when we go out together. We wanted to create an app that takes your mind off calculating the different totals and instead facilitates a conversation around who got what. Then at the end, the app takes care of the math and sends each person a message with the amount they owe.

How we built it

We split up the tasks and focused separately on creating a great UI and on implementing the functionality. A UI can always be programmed with CSS and JavaScript, but we wanted to hack together a user experience so easy that it would keep users coming back; the UI prototype was built in Adobe XD. To implement the text recognition, we made an API call to the Google Cloud Vision API and submitted our receipt image, which returned JSON containing the detected items and prices. But the items and prices came back mismatched, so we wrote an algorithm to parse through the text and match items to their prices. The app is built in React Native using Expo, so it is cross-platform.

Challenges we ran into

We were not able to use the Node.js Google Vision client library because React Native does not run Node.js. To resolve this, we made a direct API call using async/await. In addition, the text was returned separately from the prices, so we had to write an algorithm to match the items with the prices. This proved difficult because not every item has a corresponding price. We hope to improve upon this algorithm, possibly with machine learning.
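The direct call looks roughly like the sketch below. The endpoint and request shape follow the Cloud Vision `images:annotate` REST API; `apiKey` and `base64Image` are placeholders you would supply yourself, and error handling is omitted for brevity.

```javascript
// Sketch of calling the Cloud Vision REST API directly with fetch,
// since the Node.js client library is unavailable in React Native.
const VISION_URL = "https://vision.googleapis.com/v1/images:annotate";

// Build the JSON body for a single TEXT_DETECTION request
function buildAnnotateRequest(base64Image) {
  return {
    requests: [
      {
        image: { content: base64Image },
        features: [{ type: "TEXT_DETECTION" }],
      },
    ],
  };
}

async function detectText(base64Image, apiKey) {
  const res = await fetch(`${VISION_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildAnnotateRequest(base64Image)),
  });
  const json = await res.json();
  // textAnnotations[0] is the full detected block; the rest are individual words
  return json.responses[0].textAnnotations;
}
```

`fetch` is built into React Native, so no extra HTTP library is needed.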

Accomplishments that we're proud of

We're very proud of being able to submit the images through the API call and receive a JSON response with the text! It was very exciting! Being able to parse through the returned text and push each item and price into a corresponding key-value pair in a dictionary was equally rewarding. We're also proud of our UI flow, which we created after hearing feedback from others at the hackathon.

What we learned

We learned a lot about how Google Vision draws bounding boxes to read text from an image, and how to use the boxes' x,y values to group words into "lines." We were also a bit rusty with React, so it was good to get a refresher and learn about new functionality.
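The line-grouping idea can be sketched as follows. This is an illustration, not the app's actual code: it assumes each Vision word annotation has already been reduced to its text plus the top-left (x, y) of its bounding box, and `yTolerance` is a made-up threshold you would tune per receipt resolution.

```javascript
// Hypothetical sketch: words whose vertical positions fall within a tolerance
// belong to the same receipt line; within a line, order words left to right.
function groupIntoLines(words, yTolerance = 10) {
  // words: [{ text, x, y }], sorted here top-to-bottom (ties broken left-to-right)
  const sorted = [...words].sort((a, b) => a.y - b.y || a.x - b.x);
  const lines = [];
  for (const w of sorted) {
    const last = lines[lines.length - 1];
    if (last && Math.abs(w.y - last[0].y) <= yTolerance) {
      last.push(w); // close enough vertically: same line
    } else {
      lines.push([w]); // start a new line
    }
  }
  // join each line's words from left to right
  return lines.map((ws) =>
    ws.sort((a, b) => a.x - b.x).map((w) => w.text).join(" ")
  );
}
```

With words like `Burger` at (10, 100) and `12.50` at (200, 101), this yields the line `"Burger 12.50"`, which the item-price matching can then consume.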

What's next for divvi

- A more robust algorithm to pair items and prices accurately
- Use geolocation services with Google Maps to predict which restaurant the user is at
- Utilize menus posted online to help match receipt items to menu items
- Use this database to fuzzy-search shortened terms
- Implement a transaction service/API
- Create a user base and allow users to add other users to a split group
- Work with restaurants to send receipts directly to divvi so users can split and pay accordingly, paperless

Built With

React Native, Expo, Google Cloud Vision API, Adobe XD
