Inspiration
We wanted to build a tool that made medical diagnostics easily accessible to people all over the world. We thought there was potential to make use of a convolutional neural network, and that an effective tool would help everyone, from those who face great struggles in accessing medical support to people like you and me.
Many skin lesions are benign, but it is often difficult for humans to tell these apart from malignant lesions, particularly if they are not medically trained. People frequently forgo medical checkups and are therefore at risk of missing early interventions that could be lifesaving. The availability of a preliminary skin lesion assessment in seconds could make a huge social impact and was an exciting challenge for all of us.
So we found some data and got to work! And skinsight was born.
What it does
skinsight lets the user take multiple pictures of an area of their skin with their Android phone. The pictures are sent anonymously to our trained model, hosted on the Google Cloud Platform; basic statistical analysis of the model's results is then carried out and the summary is returned to the user's phone in an accessible format.
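The aggregation step can be sketched roughly as follows. This is a minimal illustration, not the app's actual code: the function name, the 0.5 decision threshold, and the wording of the verdicts are all our own for this sketch.

```python
from statistics import mean, stdev

def summarise_predictions(probs):
    """Aggregate per-photo malignancy probabilities from the model
    into simple statistics for display on the phone.

    probs: list of floats in [0, 1], one per photo taken.
    """
    avg = mean(probs)
    spread = stdev(probs) if len(probs) > 1 else 0.0
    # Majority vote across photos, using an illustrative 0.5 threshold
    votes = sum(p >= 0.5 for p in probs)
    verdict = "see a doctor" if votes > len(probs) / 2 else "likely benign"
    return {"mean": avg, "stdev": spread, "verdict": verdict}

summary = summarise_predictions([0.1, 0.2, 0.15])
```

Averaging over several photos of the same lesion smooths out any single bad shot (glare, blur), which is why the app asks for multiple pictures.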
How we built it
We built the front end using Android Studio, which combines an XML layout system with Java. A Python script then sends the image data to the Google Cloud Platform via the ML API. The backend is a deep convolutional neural network trained on over 20,000 benign and malignant skin images, which achieves accuracies above 90% with low misclassification rates.
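We don't reproduce our trained network's exact architecture here, but a binary benign/malignant classifier of this kind can be sketched in tf.keras along these lines; the layer counts and sizes below are illustrative only, not those of the deployed model.

```python
import tensorflow as tf

def build_lesion_classifier(input_shape=(224, 224, 3)):
    """A small CNN for benign/malignant skin-lesion classification.

    Architecture is a generic sketch: conv/pool blocks followed by a
    sigmoid head that outputs P(malignant) for one image.
    """
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # P(malignant)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_lesion_classifier()
```

With a sigmoid output and binary cross-entropy loss, the network's single output maps directly to the per-image probability that the statistics step aggregates.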
Challenges we ran into
1) Finding a medical area of focus with sufficient data available. (We were originally considering exposure therapy tools and tonsillitis diagnostics.)
2) Learning to write an Android app when we barely knew Java.
3) Discovering the limitations of Xamarin and C# for app development, and switching over to Android Studio instead.
4) Optimising CNN training for our dataset and the available time.
Accomplishments that we're proud of
1) Training a deep network whose accuracy is comparable to those at the cutting edge of research.
2) Learning basic Android app development.
3) Getting the camera to work on Android!
4) Building something really cool and socially impactful, with the potential to save lives!
5) Dragging my friend Oskar all the way down from Oxford (writes Peter).
What we learned
Loads of new frameworks; the value of patience while waiting for our model to train; how cool all this AI is; and more.
What's next for SkInSight
Improving functionality; ironing out bugs; then potentially making it cross platform and even rolling it out publicly as a startup.
Built With
- android-studio
- google-cloud
- java
- python
- tensorflow