Inspiration

The problem with online shopping is that the relationship between customers and returns has become unhealthy and unnecessarily wasteful. Customers waste time and money searching for the right fit: brands use different measurement scales, so customers deliberately over-purchase in order to find clothes that fit. Those returns are costly for both companies and the planet. High, unsustainable return rates cost American companies alone upwards of $50 billion annually, and returned goods are responsible for approximately 27 million tons of carbon dioxide emissions and 10 billion tons of physical landfill waste.

What it does

Our solution helps customers find their perfect fit by combining the LiDAR sensors in newer iPhones with machine learning to take body measurements accurate to within 10 mm. Customers use our application to share those measurements with retailers, who in turn use the application on the manufacturing side to measure their own clothes and match customers with the sizes and styles that fit them best.
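
As a rough illustration of the matching step, a size chart can be treated as a nearest-neighbor lookup in measurement space. The types and function names below are hypothetical sketches, not our actual implementation:

```swift
// Hypothetical sketch of size matching; Measurements, SizeChartEntry,
// and bestFit are illustrative names, not the Mez codebase.
struct Measurements {
    var chest: Double   // all values in millimetres
    var waist: Double
    var inseam: Double
}

struct SizeChartEntry {
    let label: String          // e.g. "M" or "32x30"
    let target: Measurements   // body measurements this size is cut for
}

/// Returns the size whose target measurements are closest to the
/// customer's, using squared Euclidean distance in measurement space.
func bestFit(for customer: Measurements, in chart: [SizeChartEntry]) -> SizeChartEntry? {
    chart.min { a, b in
        distance(customer, a.target) < distance(customer, b.target)
    }
}

private func distance(_ a: Measurements, _ b: Measurements) -> Double {
    let d = [a.chest - b.chest, a.waist - b.waist, a.inseam - b.inseam]
    return d.reduce(0) { $0 + $1 * $1 }
}
```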

Benefits for the business:

  • reduced return rates
  • reduced carbon emissions and physical waste
  • reduced profit loss

Benefits for the customer:

  • efficient shopping
  • better visualization
  • high customer satisfaction

How we built it

The frontend UI was built in Swift, using Apple's ARKit to access the iPhone's LiDAR sensor. The machine learning algorithms were built with OpenCV and Python and hosted on the backend, and Blender was used for all the frontend animations.
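
For a sense of the capture side, here is a minimal sketch of reading LiDAR depth through ARKit's scene-depth API; the `process` hook is a placeholder for the point-cloud conversion, not our production code:

```swift
import ARKit

final class DepthCapture: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device.
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics.insert(.sceneDepth)
        session.delegate = self
        session.run(config)
    }

    // Called once per camera frame; sceneDepth carries the LiDAR depth map.
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        // depthMap is a CVPixelBuffer of 32-bit floats, one depth value
        // (in metres) per pixel; hand it off for point-cloud conversion.
        process(depthMap, intrinsics: frame.camera.intrinsics)
    }

    func process(_ depthMap: CVPixelBuffer, intrinsics: simd_float3x3) {
        // ... convert to a point cloud and send to the backend ...
    }
}
```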

Challenges we ran into

  • Processing x, y, z, and GPS coordinates from the LiDAR sensor was incredibly difficult compared to measuring a 2D object in real time with only x and y coordinates (see the unprojection sketch after this list).

  • Integrating OpenCV and Python with Swift was particularly difficult because Apple's LiDAR sensors can only be accessed through Swift and ARKit (see the handoff sketch after this list).

  • Building the image-processing workflow was also significantly more difficult because the LiDAR sensor produces thousands of vector points, compared to basic 3D scanning.
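
On the first and third points: every LiDAR depth pixel has to be lifted into a 3D point before any measuring can happen, which is where the flood of x/y/z points comes from. The sketch below is the standard pinhole-camera unprojection, a generic formulation rather than our exact code:

```swift
import simd

/// Lifts one depth pixel to a 3D point in camera space using the
/// standard pinhole model. fx/fy are the focal lengths and cx/cy the
/// principal point, read from ARKit's camera intrinsics matrix.
func unproject(pixelX u: Float, pixelY v: Float,
               depth d: Float, intrinsics K: simd_float3x3) -> SIMD3<Float> {
    let fx = K[0][0], fy = K[1][1]   // simd matrices are column-major
    let cx = K[2][0], cy = K[2][1]
    // A 2D measurement only needs (u, v); with LiDAR every pixel
    // becomes a full (x, y, z) point to process downstream.
    let x = (u - cx) * d / fx
    let y = (v - cy) * d / fy
    return SIMD3<Float>(x, y, d)
}
```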
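
On the second point: because the depth data is only reachable from Swift, one workable handoff is to POST the packed point cloud to the Python/OpenCV backend. In this sketch the endpoint URL and binary payload layout are placeholders:

```swift
import Foundation

/// Sends a packed point cloud to the measurement backend.
/// The endpoint URL and payload format here are placeholders.
func uploadPointCloud(_ points: [SIMD3<Float>]) {
    guard let url = URL(string: "https://example.com/api/measure") else { return }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/octet-stream", forHTTPHeaderField: "Content-Type")

    // Flatten to [x, y, z, x, y, z, ...] as raw little-endian floats,
    // which the Python side can read back with numpy.frombuffer.
    var floats: [Float] = []
    floats.reserveCapacity(points.count * 3)
    for p in points { floats.append(contentsOf: [p.x, p.y, p.z]) }
    request.httpBody = floats.withUnsafeBufferPointer { Data(buffer: $0) }

    let task = URLSession.shared.dataTask(with: request) { data, _, _ in
        // The backend replies with the extracted body measurements.
        guard let data = data else { return }
        print("received \(data.count) bytes of measurements")
    }
    task.resume()
}
```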

Accomplishments that we're proud of

We're really proud that we were able to integrate so many disparate frameworks and languages that weren't necessarily built to work together.

What we learned

We learned a lot about training ML models, particularly about processing image and video streams and more advanced file types with thousands of data points, like .LAS.
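
As one example, .LAS is a binary point-cloud format whose fixed-layout header describes the point records that follow. Here is a minimal sketch of reading that header in Swift, with field offsets taken from the LAS 1.2 specification (illustrative only, not our training pipeline):

```swift
import Foundation

struct LASHeader {
    let versionMajor: UInt8
    let versionMinor: UInt8
    let pointDataOffset: UInt32   // byte offset where point records begin
    let pointRecordLength: UInt16 // bytes per point record
    let pointCount: UInt32        // legacy point count (LAS 1.2)
}

func readLASHeader(from url: URL) throws -> LASHeader {
    let data = try Data(contentsOf: url)
    // A LAS 1.2 header is 227 bytes and starts with the "LASF" signature.
    guard data.count >= 227, data.prefix(4) == Data("LASF".utf8) else {
        throw CocoaError(.fileReadCorruptFile)
    }
    // All multi-byte header fields are little-endian.
    func u16(_ offset: Int) -> UInt16 {
        UInt16(data[offset]) | UInt16(data[offset + 1]) << 8
    }
    func u32(_ offset: Int) -> UInt32 {
        (0..<4).reduce(UInt32(0)) { $0 | UInt32(data[offset + $1]) << (8 * UInt32($1)) }
    }
    return LASHeader(
        versionMajor: data[24],
        versionMinor: data[25],
        pointDataOffset: u32(96),
        pointRecordLength: u16(105),
        pointCount: u32(107)
    )
}
```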

What's next for Mez

Next steps would include:

  • rebuilding the ML layer entirely in Apple's native Core ML so integration becomes easier
  • training a second ML model to take female measurements
  • taking additional measurements based on clothing type (suits, dresses, etc.)

Built With

Swift, ARKit, OpenCV, Python, Blender
