Inspiration

One of the most difficult parts of going out to eat is picking where to go. Once everyone starts being polite, reaching a consensus is tough, so we set out to build a tool that recommends a new restaurant based on what you want.

What it does

Users simply tell Bert, our friendly chatbot, what they want from a restaurant (such as good chicken). Bert then recommends the restaurant that best matches the request and generates a preview of what the experience there would be like, based on the restaurant's reviews and average star rating.

Tested on macOS.

How we built it

We fine-tuned (with PyTorch) a pre-trained BERT model from Hugging Face Transformers. BERT is a transformer-based language model that uses only encoder blocks, which makes it well suited to sentence classification. We chose a pre-trained model because it already handles ambiguous natural language and can be adapted to our specific task; this approach has been used successfully in other applications, including next-sentence prediction and sentence classification. Our fine-tuned BERT model classifies the user's query to pick the best restaurant.
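The final classification step can be illustrated with a minimal toy sketch. In the real pipeline the scores come from the fine-tuned BERT sequence-classification head; here the logits are hard-coded placeholders and the restaurant labels are invented for illustration.

```python
import math

# Hypothetical label set: one class per restaurant in the training data.
RESTAURANTS = ["Cluck House", "Pasta Palace", "Taco Stop"]

def softmax(logits):
    """Convert raw classifier scores into probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def recommend(logits):
    """Pick the restaurant whose class score is highest.

    In We8, `logits` would come from the fine-tuned BERT
    classification head for the user's query; here they are
    placeholder numbers.
    """
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return RESTAURANTS[best], probs[best]

# A query like "good chicken" scoring highest on class 0
# yields the first label, "Cluck House".
name, confidence = recommend([2.1, 0.3, -0.5])
```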

We also used a pre-trained SentenceTransformer model to convert a list of restaurant reviews into numerical embeddings, which we then used to cluster similar reviews. For the main algorithm, a dynamic query drives a semantic search that groups reviews whose cosine similarity is 0.5 or higher. To finalize each cluster, we selected its most representative review: the one with the highest average similarity to the other reviews in that cluster. Finally, we generated a paragraph predicting what a user would experience, based on those representative reviews and the recommended restaurant's average rating.
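The clustering step above can be sketched in plain Python. This is a minimal sketch with toy 3-dimensional vectors standing in for the SentenceTransformer embeddings, and a simple greedy seeding strategy assumed for how reviews join a cluster; only the 0.5 threshold and the "highest internal similarity" selection come from our actual pipeline.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def cluster_reviews(embeddings, threshold=0.5):
    """Greedily group reviews whose cosine similarity to a cluster's
    seed review is at least `threshold`. Returns lists of indices."""
    clusters = []
    for i, emb in enumerate(embeddings):
        for cluster in clusters:
            if cosine(emb, embeddings[cluster[0]]) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def representative(cluster, embeddings):
    """Pick the review with the highest average similarity to the
    rest of its cluster -- the most 'internal' review."""
    if len(cluster) == 1:
        return cluster[0]
    def mean_sim(i):
        return sum(cosine(embeddings[i], embeddings[j])
                   for j in cluster if j != i) / (len(cluster) - 1)
    return max(cluster, key=mean_sim)

# Toy embeddings: reviews 0 and 1 are similar; review 2 differs,
# so clustering yields [[0, 1], [2]].
embs = [[1.0, 0.1, 0.0], [0.9, 0.2, 0.1], [-0.2, 1.0, 0.3]]
clusters = cluster_reviews(embs)
reps = [representative(c, embs) for c in clusters]
```

In the real system, the same thresholding can be done on SentenceTransformer embeddings with `sentence_transformers.util.cos_sim`, which computes the full cosine-similarity matrix in one call.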

We used tkinter to build input and output frames with infinite scroll, plus text boxes within the frames that feed user input into the model and display the generated text.

Challenges we ran into

Due to time constraints, we had to use a small but clean dataset, which limited the model's accuracy. We also had to fine-tune the BERT model rather than pretrain it from scratch, which would have been ideal for recommendation tasks. Since most of us are research-focused, our experience with front-end development was limited, so we had to adapt. Choosing the best dataset and model was also difficult.

Accomplishments that we're proud of

Our novel recommendation model, based on BERT, is extraordinarily versatile and can be trained to recommend almost anything that has ratings. We created two different generative components: one that recommends a restaurant and one that generates the general experience a user could expect at the recommended restaurant. We also successfully built a GUI that requires minimal clicks and connected it to our models.

What we learned

In 48 hours, and without any prior experience, we learned how to build a GUI and integrate front-end and back-end development into a coherent whole. We also got a better sense of the time and computational resources needed to build large datasets and train models on them: we used 32 cores on HiPerGator and still struggled to train our models.

What's next for We8

We plan to enhance We8's output by adding average cost, letting users get recommendations based on restaurants they have previously visited, using the Places API to filter by distance, and training on more data (such as Yelp's 5 GB dataset). We also plan to pretrain a model from scratch instead of starting from one that has already been pretrained, and to apply We8's underlying BERT model to other services and academic uses, such as recommending scientific papers or travel destinations.

Team Members

Aahan Dwivedi, Physics: www.linkedin.com/aahan-dwivedi/

Ben Sherwin, Math/Physics/Astrophysics: www.linkedin.com/in/ben-sherwin/

Brighten Jiang, Astrophysics: www.linkedin.com/in/brightenjiang/

Huy Tran, Physics/Math: www.linkedin.com/in/nhat-huy-tran/

Sophia Huerta, CSE: www.linkedin.com/in/sophia-huerta/
