Inspiration

Very often, I hear my parents complain about the same thing every week.

"Your grandparents never know how to do their documents!" "All they have to do is search it up online!" "I don't have the patience to help them work through it every day."

I've seen my mom try countless times to teach them how to take a picture to translate a document, and then search up how to fill it out on Google. But it's difficult, because many people of that generation never had to interact with this type of digital technology growing up. Why do we have to use it now? Our team decided to look at it from a different angle: what if they could get additional help with their documents through an interface they're far more familiar with, conversation?

What it does

Lily is an un-intimidating, AI-powered flower designed to help people navigate complex medical and financial documents through natural conversation. It consists of two parts:

  1. The Lily Hardware: A friendly, flower-shaped device equipped with a camera, microphone, and spatial audio. Instead of struggling with a smartphone, a user simply holds a document they receive in the mail up to Lily. Lily sees the document and walks them through the necessary steps in understandable language, regardless of language barriers, accents, speech impediments, or vision impairment.

  2. The companion app: While Lily assists the senior, it simultaneously records and documents the conversation. Any areas of confusion, needed reminders, or simple FYIs link directly to our accompanying app. This allows family members to stay informed and intervene only when necessary, keeping communication clear while still preserving the senior's agency and independence.

Imagine a helper equipped to assist any type of person for any type of document, without limit to patience. That's Lily.

How we built it

We used React Native to build an iOS and Android app that communicates with a Python backend, which our Arduino Uno-Q sends its requests to. This required strong coordination across API boundaries, as there are a lot of moving parts for this system to work.
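To make the flow concrete, here is a minimal sketch of how the backend might handle one request from the Uno-Q. All names and the payload shape (`device_id`, a base64 image field) are illustrative assumptions, not our actual API:

```python
import base64
import json
from dataclasses import dataclass


# Hypothetical shape of one request the Uno-Q sends to the backend.
@dataclass
class DeviceRequest:
    device_id: str
    image_b64: str  # JPEG frame of the document, base64-encoded


def handle_device_request(raw: str) -> dict:
    """Parse a JSON payload from the device and return a response that
    both the device and the companion app could consume. The OCR/AI step
    is elided; names here are illustrative."""
    payload = json.loads(raw)
    req = DeviceRequest(device_id=payload["device_id"],
                        image_b64=payload["image"])
    image_bytes = base64.b64decode(req.image_b64)
    # ...document OCR and conversational guidance would happen here...
    return {"device_id": req.device_id,
            "bytes_received": len(image_bytes),
            "status": "ok"}
```

Even a toy handler like this shows why coordination matters: the device, the backend, and the app all have to agree on field names with nothing enforcing that agreement.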

Challenges we ran into

Choosing Python as our backend language was a double-edged sword. On one hand, Python's extensive collection of libraries made prototyping fast. However, the lack of type safety ended up shooting us in the foot by the end: typos and misunderstandings quickly propagated into hour-long debugging sessions, with very little observability. In the future, we think Protobuf with gRPC would probably save time in the long run, as it forces consideration of service boundaries at the very start and can compile into type-safe functions and parameters across languages (ConnectRPC).
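As a small illustration of the failure mode (not our actual data model; the names are made up), a plain dict lets a misspelled key return None silently, while even a simple dataclass fails loudly at the call site:

```python
from dataclasses import dataclass

# With an untyped dict, a misspelled key slides through as None,
# and the bug surfaces much later, far from its cause:
event = {"device_id": "lily-01", "confusion_flag": True}
flag = event.get("confusionflag")  # typo -> None, no error raised

# The same typo against a dataclass raises immediately:
@dataclass
class ConversationEvent:
    device_id: str
    confusion_flag: bool

ev = ConversationEvent(device_id="lily-01", confusion_flag=True)
try:
    _ = ev.confusionflag  # AttributeError right at the call site
    caught = False
except AttributeError:
    caught = True
```

Schema-first tools like Protobuf push this kind of check even earlier, to code generation time, before anything runs at all.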

Accomplishments that we're proud of

Given the scope of the project, we naturally had to balance a lot of different interfaces. While we were each typically accustomed to working with just design, just hardware, or just AI software, integrating everything into one system was something new.

But given that we had a team with such diverse skillsets, we were able to use constant collaboration and teaching to put together something we could all call truly unique.

What we learned

Two of our team members had never used SQL or S3 in any real capacity before, and it was a great opportunity for them to learn, especially on a platform with this many moving parts. We scoped the MVP a little too big, so much of our time was spent building features that didn't make the final cut. Still, we had a lot of fun putting the vision together, even if we bit off more than we could chew.

What's next for Lily

While we were able to get most of the bare-bones MVP functionality working during the hackathon, we all see a clear foundation to build on. What if we could train the camera to remember faces and voices? What if we could add support for conditions such as Parkinson's? What if we could make different models of flowers? These are all things we hope to explore soon!
