Gallery
- Touchless sensing: a hand can be over two centimetres away and still be sensed
- Laptop with the LumiGUI form, connected to the LumiGUI hardware device
- MongoDB database with the MongoDB terminal and our unique configuration
- Product front view with hidden wires
- Top view with the speech indicator light on
- Landing page that allows users to begin the journey
- Future plans for multipage forms
- PrusaSlicer, which helped us 3D print the components
Inspiration
👋 Our project began with a wave.
In washrooms, we noticed motion sensors being used to turn on faucets, and we thought they were a great way to promote accessibility and hygiene compared to the alternative, hand-turned faucets. If we could apply this concept to filling out forms at hospitals, another setting where germs and sickness are a risk, we could address these concerns in a practical way. The result is LumiGUI, a portable and accessible solution for medical form input.
What it does
The project aims to provide an intuitive, user-friendly interface for filling out hospital forms. To achieve this, an Arduino-based hardware device is set up near the monitor and controls movement through the form, so no hands are required to complete it.
Users can easily answer the questions on the form without physically touching the screen or any buttons. Once the form is complete, they can submit it, and the information will be sent to the appropriate healthcare provider or administrative office.
This project can be extended to other fields and disciplines where touch-free forms are desirable. By eliminating the need for physical contact, this technology can improve efficiency and reduce the risk of transmitting infections.
How we built it
There were two parts to the project: an Arduino hardware side and a website form connected to the MongoDB Atlas Database.
The hardware consists of photoresistors that detect a change in light. When a user places a hand near the product, it blocks light from reaching the resistor, causing a change in current; this lets the device detect the user's presence without any physical contact. An Arduino microcontroller processes the signal from the photoresistors and writes commands to the serial port. On the computer, pyserial reads the serial port, and pyautogui binds the serial commands to Windows hotkeys and keypresses.

Additionally, to create a professional product suitable for deployment in hospitals, we designed and printed a custom 3D enclosure. Its design is minimalist, with easy-to-identify markers that make the device easy to use and understand.
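The serial-to-hotkey bridge described above can be sketched in Python. This is a minimal illustration, not our exact script: the command words ("next", "prev", "select"), the key bindings, and the COM port are our assumptions for the example, though `serial.Serial` (pyserial) and `pyautogui.hotkey` are the real APIs involved.

```python
# Host-side bridge: read one-word commands from the Arduino over serial and
# forward them to the form as Windows keypresses. Command names, bindings,
# and port are illustrative assumptions.

# Map each serial command to the hotkey sent to the form in the browser.
COMMAND_KEYS = {
    "next": ("tab",),          # move to the next form field
    "prev": ("shift", "tab"),  # move back one field
    "select": ("enter",),      # confirm the highlighted option
}

def keys_for(command: str):
    """Return the hotkey tuple for a serial command, or None if unknown."""
    return COMMAND_KEYS.get(command.strip().lower())

def run(port: str = "COM3", baud: int = 9600) -> None:
    # Third-party imports are deferred so the pure mapping above can be
    # tested without the hardware or a display attached.
    import serial      # pyserial
    import pyautogui
    with serial.Serial(port, baud, timeout=1) as conn:
        while True:
            line = conn.readline().decode("ascii", errors="ignore")
            keys = keys_for(line)
            if keys:
                pyautogui.hotkey(*keys)  # forward as a keypress
```

Keeping the command-to-key mapping as a plain dictionary makes it easy to rebind actions without touching the serial loop.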
The software component involved developing a website that could receive input from the hardware. The website was also programmed to use Windows hotkeys to receive speech input, and user-friendly forms make it easy to navigate. Once user input is received, it is saved and submitted to a secure MongoDB Atlas database, configured with hidden API keys to protect private health information. The combination of the hardware and software components resulted in a final product we are proud to showcase: an intuitive, user-friendly experience that also maintains the privacy and security of sensitive health data.
Challenges we ran into
- During the development of our project, we encountered challenges with the React Speech Recognition tool when it came to accurately transcribing forms and saving the information to the Atlas database. To address this issue, we decided to use Windows hot keys instead, which proved to be a more effective solution.
- We had no prior experience with 3D printing at a hackathon, so we were unfamiliar with the time constraints it imposes. We printed a few test pieces and, in the end, produced a successful product by modifying a boxed design into a honeycomb pattern. A minimal design went a long way, and it gave us a more unique, customized 3D enclosure.
- In the course of the project, we also had the opportunity to work with Blender for the first time. Although it required a steeper learning curve than traditional CAD software, we found that it allowed us to be more creative in our designs. For example, we were able to engrave the name of our project onto the device for a more personalized touch.
Accomplishments that we're proud of
- It was an exciting experience to learn how to interact with the database from the backend, which opened up a range of possibilities for future projects.
- To ensure the success of the project, we created a backup plan in case the speech API failed. If necessary, users could still complete the form with text and save it to the MongoDB database.
- Despite having limited prior experience with 3D printing, we were able to quickly adapt and print a prototype during the hackathon. The ability to shift our ideation phase and rapidly create a tangible product was a surreal experience.
- Working with MongoDB and its command line interface proved to be a complex process. However, we were able to navigate the database using shell commands and the command line interface.
- Connecting the Arduino to the computer to create an integrated project. This was new to us; in past projects we would typically build the hardware or the software, but not both.
What we learned
- Prototyping takes time. It was good that we started printing by Saturday noon, since we had to wait a few hours for the print to finish.
- The whole ideation phase for hardware-software required clear planning and execution (especially since the two are dependent on one another)
- We also learned how to set up an Express server with routes and models, marking our first experience with backend development. In past hackathons we used fetch and axios on third-party APIs to avoid needing our own backend, but here we used them with the MongoDB database. This knowledge and experience will be valuable in future projects.
- We learned how to set up GitHub submodules to split our frontend and backend.
What's next for LumiGUI
- Deployment and user testing to deal with edge cases and iterate to improve.
- Multipage forms that allow different types of content to be collected.
- Some kind of microphone enclosure that allows users to be discreet in a public setting.
