Inspiration
AI algorithms are increasingly shaping our daily lives, influencing what we see and believe—often without our awareness. Transparency in this process is more critical than ever.
Recent research in predictive processing, a cognitive science framework, suggests that our minds function as predictive models, shaping beliefs, actions, and emotions based on the data they receive. When that data is skewed, so are our perceptions.
One well-documented effect of this is the illusory truth effect, where repeated exposure to information makes it seem more true, even when we know it’s false. As social media algorithms curate our feeds, they don’t just reflect our interests—they actively shape them.
For example, a user who engages with health and wellness content, such as workout routines, healthy recipes, and mindfulness practices, may gradually see more content related to alternative health practices—like unverified supplements or unconventional diets. Repeated exposure, even for those skeptical of these ideas, can still shift beliefs and behaviors. This effect can extend to other areas, such as political content or conspiracy theories. Content with a negative emotional tone can also lower our moods and even worsen conditions such as depression.
We can’t change these algorithms, but we can make their influence more transparent. If our beliefs are being shaped by what we consume, we should have the ability to track and understand that exposure. Our project aims to give users insight into the content they’re recommended—helping them make informed decisions about what they see and believe.
What it does
Algoware: (noun) To be aware of the algorithm
Algoware is a Google Chrome extension that analyzes the recommended content on a user's homepage, aiming to provide transparency into the content they consume.
Algoware generates an analysis report of the topics that appear on the user's YouTube home feed, along with a sentiment analysis of that content. Users who run the extension consistently can also access their historical data.
How does it work?
The extension grabs transcript data from the videos on a user's YouTube homepage, feeds this data into a model (that we trained!), and outputs each video's topic along with the percentages of positive, negative, and neutral content recommended within every topic.
This lets you track the topics you are recommended, their sentiment, and whether you are being exposed to negative content on any particular topic.
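The per-topic breakdown described above can be sketched roughly as follows. The input format and function name here are illustrative stand-ins for what our topic model and sentiment step actually produce, not the real implementation:

```python
from collections import Counter, defaultdict

def aggregate_sentiment(videos):
    """Given a (topic, sentiment) record per recommended video,
    compute per-topic percentages of positive/negative/neutral content.

    `videos` is a simplified stand-in for the model output:
    a list of dicts like {"topic": "finance", "sentiment": "negative"}.
    """
    counts = defaultdict(Counter)
    for v in videos:
        counts[v["topic"]][v["sentiment"]] += 1

    report = {}
    for topic, c in counts.items():
        total = sum(c.values())
        report[topic] = {
            s: round(100 * c[s] / total, 1)
            for s in ("positive", "negative", "neutral")
        }
    return report

sample = [
    {"topic": "finance", "sentiment": "negative"},
    {"topic": "finance", "sentiment": "positive"},
    {"topic": "sports", "sentiment": "neutral"},
]
print(aggregate_sentiment(sample))
# finance → 50% negative / 50% positive; sports → 100% neutral
```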
How we built it
Techstack
- Frontend: HTML, CSS, JavaScript
- Backend: Python (Libraries: google-api-python-client, youtube_transcript_api, pandas, pytube)
We first scraped YouTube URLs from the homepage feed and used the YouTube Data API to grab video titles and transcripts. We trained a model to recognize a range of topic categories, from broad ones like sports and finance to more specific ones like AI. Finally, we added sentiment analysis using the Google Cloud Natural Language API.
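Before any analysis, the timed caption segments have to be joined into one plain-text document per video. A minimal sketch, assuming the segment format that `youtube_transcript_api` returns (the sample is hard-coded so it runs offline; in the real pipeline the segments come from the library):

```python
def flatten_transcript(segments, max_chars=5000):
    """Join timed caption segments into one plain-text string
    for the topic and sentiment models.

    `segments` follows the youtube_transcript_api shape:
    a list of {"text": ..., "start": ..., "duration": ...} dicts.
    """
    text = " ".join(seg["text"].replace("\n", " ") for seg in segments)
    # Cap length so one long video doesn't dominate downstream API usage.
    return text[:max_chars]

# Hard-coded sample segments (illustrative, not real caption data).
sample_segments = [
    {"text": "welcome back to the channel", "start": 0.0, "duration": 2.1},
    {"text": "today we talk about index funds", "start": 2.1, "duration": 3.4},
]
print(flatten_transcript(sample_segments))
# → welcome back to the channel today we talk about index funds
```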
Challenges we ran into
- Retrieving data (such as transcripts for every recommended video) from the YouTube homepage.
- Applying sentiment analysis (identifying the positive, negative, or neutral stance of a statement) to every recommended video.
- How do we incorporate this into our prediction model?
- Which topics of content lend themselves to sentiment analysis? (i.e., objective vs. subjective)
- Connecting the Chrome extension frontend to the backend.
- Configuring the Flask server: obtaining transcript data from a YouTube URL scraped by the frontend, feeding it into the prediction model on the backend, and getting the output to display.
- Too many YouTube videos to analyze on the homepage!
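The frontend-to-backend handoff above can be sketched as a single Flask endpoint. The route name, JSON shape, and the `classify_topic`/`analyze_sentiment` stubs are hypothetical stand-ins for our trained model and the Google Cloud Natural Language API call, not the actual server:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical stand-ins: the real versions call our trained topic model
# and the Google Cloud Natural Language API.
def classify_topic(text):
    return "finance" if "fund" in text else "other"

def analyze_sentiment(text):
    return "neutral"

@app.route("/analyze", methods=["POST"])
def analyze():
    # The extension POSTs the scraped transcripts as JSON;
    # the server returns one topic/sentiment record per video.
    videos = request.get_json()["videos"]
    results = [
        {"topic": classify_topic(v["transcript"]),
         "sentiment": analyze_sentiment(v["transcript"])}
        for v in videos
    ]
    return jsonify(results)

if __name__ == "__main__":
    app.run(port=5000)
```

Keeping the endpoint stateless like this means each homepage scrape is an independent request, which simplifies the extension side considerably.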
Accomplishments that we're proud of
First and foremost, we are really proud of the idea itself. We feel that transparency in AI, and in the selection process behind the content that algorithms push, often comes as an afterthought.
While the world works to make the algorithms better and the AI models sharper, we wanted to take a step back and work on making the algorithms and AI clearer.
What we learned
We learned a lot about developing AI applications in general, and that distinguishing the nuances between certain sentiment analysis outcomes is difficult. Implementing sentiment analysis through AI tools (the Google Cloud Natural Language API) was a challenge in and of itself.
No one on our team had worked with Chrome browser extensions before either, and we learned a lot about the kinds of tools available to extensions.
What's next for Algoware
- Model with higher accuracy
- More analytics → trends in content recommendations, political leaning analysis, watch history analysis, and more!
- Application to other social media platforms and short-form content (e.g., X, Facebook/Instagram Reels, TikTok)
- Application to other forms of content (e.g., blogs, news articles)
- Actual Deployment! 🤭 (Potential longer-term data storage of content analysis)
Built With
- flask
- google-api-python-client
- javascript
- python
- pytube
- youtube
- youtube-transcript-api
- youtubedataapi