Inspiration

Social media can provide valuable insight into the status of a company or product, but with 2.5 quintillion bytes of data created on the internet every day, monitoring an online presence can be overwhelming. Scope finds quantifiable data and presents useful analyses of a company’s internet presence across various social media platforms and prominent internet sites.

What it does

Once the user inputs a company name and/or product, Scope searches Reddit and Google Trends for relevant data, then synthesizes that data into information on how the company can improve its presence. Scope reports the company’s relevance in the news, the U.S. states with the highest and lowest mentions, competitor data, and prominent social media posts. With the help of AI, the data from Reddit and the news are evaluated to gauge whether posts about the company are positive or negative, and any reviews or suggestions are summarized. AI is also used to generate a list of potential competitors, which are then run through the comparison function to show how relevant each competitor is.
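The aggregation step above can be sketched in a few lines, assuming the model has already attached a per-post sentiment label (the function and field names here are hypothetical, not the actual implementation):

```python
from collections import Counter

def summarize_sentiment(labeled_posts):
    """Aggregate per-post sentiment labels ('positive'/'negative'/'neutral')
    into overall counts and a simple net score for display."""
    counts = Counter(post["sentiment"] for post in labeled_posts)
    total = sum(counts.values()) or 1  # avoid division by zero
    net = (counts["positive"] - counts["negative"]) / total
    return {"counts": dict(counts), "net_score": round(net, 2)}

posts = [
    {"title": "Great product", "sentiment": "positive"},
    {"title": "Support was slow", "sentiment": "negative"},
    {"title": "New release out", "sentiment": "positive"},
]
print(summarize_sentiment(posts))  # counts plus a net score of 0.33
```

A net score near +1 means coverage is overwhelmingly positive; near -1, overwhelmingly negative.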

Overall, Scope aims to summarize a company or product’s presence on the internet and to present it in a way that helps companies improve and adapt to social media feedback.

How we built it

On the backend, we utilized various APIs for social media data extraction and Bedrock AI for data analysis. For scraping Reddit, we used the PRAW API, and for Google Trends data, we integrated SerpAPI. The extracted data was imported into AWS S3 for storage and preprocessing. After cleaning the data in S3, it was passed to our Bedrock AI model (specifically Claude) for further analysis.
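The cleaning step between scraping and analysis can be sketched as a pure function like the one below; the PRAW fetch, the S3 upload, and the Bedrock call are elided into comments, and the bucket/key names are hypothetical:

```python
import json
import re

def clean_posts(raw_posts, min_score=5):
    """Normalize scraped Reddit posts before storage and analysis:
    drop low-score posts, strip URLs and usernames for privacy,
    and keep only the fields the model needs."""
    cleaned = []
    for post in raw_posts:
        if post.get("score", 0) < min_score:
            continue
        text = post.get("title", "") + " " + post.get("selftext", "")
        text = re.sub(r"https?://\S+", "", text)        # remove links
        text = re.sub(r"/?u/[A-Za-z0-9_-]+", "", text)  # remove usernames
        cleaned.append({"text": text.strip(), "score": post["score"]})
    return cleaned

# In the full pipeline (sketch, names are illustrative): PRAW fetches
# raw_posts, boto3 uploads the cleaned JSON to S3, e.g.
#   s3.put_object(Bucket="scope-data", Key="posts.json",
#                 Body=json.dumps(cleaned))
# and bedrock-runtime then invokes Claude on the cleaned text.
```

Keeping this step pure (a plain list in, a plain list out) makes it easy to test without touching Reddit, S3, or Bedrock.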

The backend was built in Python and wrapped in Flask to expose an API to the frontend. We used React with Vite to display our frontend output, working in HTML, JavaScript, and CSS.
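A minimal sketch of the Flask connection, assuming a hypothetical `/api/report` endpoint and a stubbed payload (the real route would call the scraping and Bedrock pipeline instead):

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/report")
def report():
    # The frontend passes the company name as a query parameter.
    company = request.args.get("company", "")
    if not company:
        return jsonify({"error": "company parameter required"}), 400
    # Placeholder payload shaped like what the React frontend renders.
    return jsonify({
        "company": company,
        "sentiment": {"positive": 0, "negative": 0},
        "top_posts": [],
    })
```

The React app would fetch this JSON and render it; in development, a CORS setup (e.g. flask-cors) or a Vite dev-server proxy is typically needed so the two ports can talk.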

How we responsibly use AI

We use AI only for summarization and aggregation of data, and our use of Claude introduces no bias or discrimination: we gather data purely based on numbers, statistics, and the overall sentiment expressed online. Any data collected is also free of personally identifiable information, ensuring privacy.

Challenges we ran into

A major challenge throughout this project was connecting the components together. Each team member had a very specific scope, so ensuring that each layer was functional and compatible with the others required close attention to detail. Using AI to process so much data also proved challenging, as it introduced delays and required optimization efforts.

Accomplishments that we're proud of

Our biggest accomplishment was connecting all of the components together to create a final product! The variety of outputs shown on our page, the result of our different individual focuses, really creates a useful product that provides unique insight. We worked hard to make sure that all of the outputs are represented in meaningful ways. Our team’s communication also deserves recognition: our dynamic helped us delegate tasks and set clear goals for a successful project, and it made the sometimes stressful experience of a hackathon positive and fun.

What we learned

One of our major takeaways is that it is important to have a plan and to keep the big picture in mind while working through the project. Being able to visualize the flow, understand the stack, and communicate about what needs to be done is essential to making sure everything comes together. On the more technical side, using APIs with Python was new for some of us, and we learned how useful JSON objects can be. We also learned that latency and memory consumption should always be considered when using AI, since model calls take far longer than ordinary function calls.
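The JSON takeaway in practice: API responses arrive as JSON text, and the standard library turns them into ordinary dicts and lists (the response body below is illustrative, not real API output):

```python
import json

# A response body shaped like the region data our APIs returned.
body = '{"query": "Scope", "interest_by_region": [{"state": "CA", "value": 87}, {"state": "NY", "value": 54}]}'

data = json.loads(body)  # JSON text -> Python dict/list
top = max(data["interest_by_region"], key=lambda r: r["value"])
print(top["state"], top["value"])  # -> CA 87
```

Once parsed, the same dict can be filtered, sorted, or re-serialized with `json.dumps` for storage in S3.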

What's next for Scope

Our data analysis could always be improved; we were hoping to train our own AI model but did not have time. With a better model and a richer data set, we could draw more meaningful conclusions and produce stronger outputs. We also hope to expand our scraping technologies to gather data across all prominent social media platforms.
