Inspiration
When writing a paper for class or conducting research for a professor, you need sources to cite, and it's imperative that anything quoted is unbiased and accurately represents the state of a topic. Otherwise, your paper loses credibility and you lose credibility as an author. UnBias was born to make the job of detecting inherent bias easy and accessible.
What it does
While enabled, UnBias extracts any highlighted text on any website (headers, titles, body text, etc.) and displays a verdict: is this text biased? It not only returns a true/false answer, but also classifies the bias into one of four categories: general bias, toxicity, sentimental bias, and hate speech. This way the user gets accurate, specific information about the bias they have found in a source.
How we built it
We built UnBias as a Google Chrome browser extension. The frontend is built with HTML, CSS, and JavaScript, with a little React/Node. It lets the user toggle the extension on and off and click to receive their bias indication. The backend is mostly Python and JavaScript communicating via JSON. In our Python file, we call the Hugging Face API's general-purpose LLM, Flan-T5-Large, which takes the highlighted text posted from the JavaScript file and classifies it into our four categories: general bias, sentimental bias, toxicity, and hate speech. Based on that category, we call another Hugging Face model that specializes in assessing the strength and confidence of that specific type of bias, so we can dynamically output the most accurate results for a wide range of text and sentence types. The backend returns a JSON object with the analysis label, confidence score, and raw output. The JavaScript file simply handles extracting the highlighted text and posting it to the Python backend, where the bulk of the computation occurs.
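A minimal Python sketch of the backend flow described above: route the classified category to a specialist model, call the Hugging Face Inference API, and shape the JSON result. The specialist repo names here are placeholders (not the actual models we used), and `query_hf` is an illustrative helper, not our exact code.

```python
import json
import urllib.request

HF_API = "https://api-inference.huggingface.co/models/"

# Hypothetical mapping from the general classifier's verdict to a
# specialist Hugging Face repo; the repo names are placeholders.
SPECIALIST_REPOS = {
    "general bias": "example/bias-detector",
    "sentimental bias": "example/sentiment-analyzer",
    "toxicity": "example/toxicity-scorer",
    "hate speech": "example/hate-speech-detector",
}

def route_model(category):
    """Pick the specialist model repo for a classified bias category."""
    return SPECIALIST_REPOS[category.lower()]

def query_hf(repo, text, token):
    """POST the highlighted text to the Hugging Face Inference API (network call)."""
    req = urllib.request.Request(
        HF_API + repo,
        data=json.dumps({"inputs": text}).encode(),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def build_result(label, score, raw):
    """Shape the JSON payload the backend sends back to the extension."""
    return {"label": label, "confidence": score, "raw": raw}
```

The routing table is what makes the system dynamic: one cheap classification pass chooses which specialized model does the real scoring.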
Challenges we ran into
For the backend, we repeatedly struggled to find a good prompt for the general LLM, Flan-T5-Large, and another big issue was designing the dynamic classification system. Initially we tried to use just one LLM/Hugging Face repo, but this only worked in very niche cases specific to that one repo. That's where the idea of dynamically choosing among repos came from.
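To illustrate the prompt-engineering challenge, here is a sketch of the kind of single-label classification prompt one might send to Flan-T5; the wording is illustrative, not our final prompt.

```python
def build_prompt(text):
    """Build a Flan-T5-style classification prompt (illustrative wording)."""
    categories = "general bias, sentimental bias, toxicity, hate speech"
    return (
        "Classify the following text into exactly one of these categories: "
        f"{categories}.\n"
        f"Text: {text}\n"
        "Category:"
    )
```

Small changes to this wording (ordering the categories, ending on "Category:") noticeably affect how reliably an instruction-tuned model picks exactly one label.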
Accomplishments that we're proud of
This project wasn’t an easy ride, but we’re really proud of how we managed to push through the tough spots. The frontend had its challenges, but the real focus for us was the backend. That’s where we spent most of our time, learning by trial and error. What we’re most proud of is creating a dynamic classification system instead of relying on just one LLM. We used Hugging Face models and Python to get everything working smoothly, and it’s honestly been such a rewarding learning experience. We gained a lot of valuable hands-on experience in the process, especially when it comes to backend development and working with diverse tools.
What we learned
Building this project was a full experience, and we gained a lot from it. We learned the importance of teamwork and collaboration: working together, communicating effectively, and relying on each other's strengths. On top of that, we gained technical skills across the board, getting hands-on with both backend and frontend development. On the frontend side, we tested our skills with HTML, CSS, and JavaScript, while also exploring React/Node to make the interface more interactive. On the backend, we worked with Python and Hugging Face models, learning how to dynamically classify bias and integrate various tools to improve accuracy. The project also taught us a lot about troubleshooting and testing as we iterated and refined the system. Problem-solving was a key skill, especially when things didn't go as planned and we had to think critically and find solutions quickly. Lastly, we strengthened our time management skills, balancing multiple tasks and deadlines while making steady progress. Beyond coding, this project was a valuable experience in growing both as a team and as individuals.
What's next for UnBias
We plan to keep refining UnBias by improving the backend to handle more complex text. We also want to add new features, like summarization, so users can quickly get concise, unbiased summaries of longer content. Another feature we're excited to work on is bias search, allowing users to search for specific types of bias across different sources. Along with these, we'll focus on making the user interface even better, ensuring it's more accessible and easier to use for everyone. We would also like to move UnBias from a local application to a globally hosted one using AWS. We attempted this during the hackathon but were unsuccessful because AWS deployments were taking too long.