Inspiration
Back when I was a young kid, I got bitten by a tick and developed a large, bullseye-shaped rash. It turns out that rash is one of the telltale symptoms of Lyme disease. Unfortunately, I wasn't diagnosed early enough, so over about a month I slowly lost control of my arms and legs. Eventually, I had to go to school in a wheelchair - it was bad. Lyme actually has a pretty effective treatment; the problem is its diagnosis. The rash I got, though a pretty clear signal of Lyme, is often mistaken for another condition.
So, the goal was to develop a system that can accurately diagnose Lyme disease from a picture of a rash. This updated version of the system takes advantage of the newest in computer vision tech - transformers - to provide even more accurate results with even less data. I'm excited to see where this project goes, and I hope it can help a lot of people.
Some quick stats about Lyme disease to drive home the extent of the issue:
- Lyme disease is the most common vector-borne disease in the United States.
- In 2022, over 60,000 new cases of Lyme disease were reported.
- The CDC estimates that the actual number of cases is closer to 476,000.
- Lyme can cause nerve problems, paralysis, meningitis, or even heart problems.
What it does
It's a simple application at first glance. All you need to do is upload an image of a suspected Lyme disease rash, and we'll run it through a custom-built transformer classifier based on OpenAI's CLIP models that can accurately diagnose it.
Another really cool benefit of this approach is the speed - this model is super lightweight compared to other similar disease detection models, because most of the heavy lifting happens in the image embedding step. As you can see in the video demo, it's lightning fast. That matters, because patients need a rapid diagnosis.
What makes it different
Unlike the first iteration of this project, the updated model uses transformer models to tokenize images. This means far less training data is required, which in turn yields a smaller, more accurate model. This approach was only made possible by recent developments in the transformer architecture. In addition, the interface is much more advanced than the old version's, with a backend running on Flask and a custom-built, minimalistic frontend.
How we built it
I trained the model with Keras and TensorFlow first. It tokenizes images with a ViT (Vision Transformer), then feeds those tokens through a standard multi-layer perceptron. This process happens on a Flask server acting as the backend, and results are sent over to the React frontend.
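To make the pipeline concrete, here's a minimal sketch of the inference step described above: a precomputed ViT/CLIP-style image embedding fed through a small multi-layer perceptron. The 512-dimensional embedding size matches CLIP's image encoder, but the layer sizes, weights, and function names here are hypothetical stand-ins, not the trained LANTERN model.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mlp_classify(embedding, w1, b1, w2, b2):
    """Score an image embedding: probability that the rash is Lyme."""
    hidden = relu(embedding @ w1 + b1)   # hidden layer over the embedding
    return sigmoid(hidden @ w2 + b2)     # squash to a probability in (0, 1)

# Stand-in for a CLIP image embedding and untrained illustrative weights.
rng = np.random.default_rng(0)
embedding = rng.normal(size=512)
w1, b1 = rng.normal(size=(512, 64)) * 0.01, np.zeros(64)
w2, b2 = rng.normal(size=(64, 1)) * 0.01, np.zeros(1)

prob = mlp_classify(embedding, w1, b1, w2, b2)
```

Because the heavy vision model only produces the embedding, the classifier on top stays tiny, which is where the speed advantage comes from.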
Challenges we ran into
Working with transformers was extremely challenging, as the technology was new to me, and getting past the errors took a lot of effort. In addition, I had never used Flask as a serious tool before, so setting up an API was challenging at times (especially with problems relating to CORS).
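For anyone hitting the same CORS wall: the browser blocks the React frontend (one origin) from calling the Flask backend (another origin) unless the server sends the right headers. Below is a minimal sketch of one way to handle it; the `/predict` route name and response shape are illustrative, not the exact LANTERN API.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.after_request
def add_cors_headers(resp):
    # Allow the React dev server (a different origin) to call this API.
    resp.headers["Access-Control-Allow-Origin"] = "*"
    resp.headers["Access-Control-Allow-Headers"] = "Content-Type"
    resp.headers["Access-Control-Allow-Methods"] = "POST, OPTIONS"
    return resp

@app.route("/predict", methods=["POST"])
def predict():
    image = request.files.get("image")
    if image is None:
        return jsonify(error="no image uploaded"), 400
    # ...run the ViT + MLP model on the uploaded image here...
    return jsonify(lyme_probability=0.0)  # placeholder score
```

The `flask-cors` extension does the same thing with less code, but adding the headers manually makes it clear what the browser actually needs.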
Accomplishments that we're proud of
I'm super happy that I managed to get the Flask API up and running so quickly. I'm also proud to have the experience needed to pull off a technically involved project like this one - it relies on ML, CV, frontend, and backend development, so you need to be a jack of all trades.
What we learned
I learned a lot about how to work with transformer models and how to create a clean, simple frontend website to demo CV models. The biggest problem with a lot of these models is that they aren't deployed, so being able to deploy my own was super fun.
What's next for LANTERN
I want to try my hand at converting this app into a React Native app, so people can just take a photo on their phones instead of uploading it to a computer. I also want to find a way to host this on the web - I spent hours trying to find a free way to do it, but no provider fits my needs.
Built With
- flask
- javascript
- keras
- python
- react
- tensorflow