Inspiration

We were inspired by our personal experiences with toxicity and non-inclusive language on Twitter.

What it does

We built a Chrome extension that blurs tweets containing non-inclusive language.

How we built it

We built the Chrome extension by following the Chrome developer documentation, using HTML, JavaScript, and a manifest file. We queried the tweets on a Twitter page, sent their text to the co:here toxicity classification API, and then, based on the result, decided whether to blur each tweet on the page.
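At a high level, the content script can be sketched as follows. The endpoint URL, response shape, selector, and threshold below are illustrative assumptions, not the exact values we used.

```javascript
// Content-script sketch (illustrative; endpoint, selector, and threshold are assumptions).
const COHERE_CLASSIFY_URL = "https://api.cohere.ai/v1/classify"; // assumed endpoint

// Pure helper: decide whether a tweet should be blurred from one classification
// result, assumed to look like { prediction: "toxic", confidence: 0.93 }.
function shouldBlur(result, threshold = 0.5) {
  return result.prediction === "toxic" && result.confidence >= threshold;
}

// Apply a CSS blur to a tweet element.
function blurTweet(el) {
  el.style.filter = "blur(8px)";
}

// Query tweets on the page, classify their text, and blur the toxic ones.
async function filterTweets(apiKey) {
  const tweets = document.querySelectorAll('[data-testid="tweetText"]');
  const inputs = [...tweets].map((el) => el.innerText);
  if (inputs.length === 0) return;
  const res = await fetch(COHERE_CLASSIFY_URL, {
    method: "POST",
    headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ inputs }), // a real request also needs model/examples fields
  });
  const { classifications } = await res.json();
  classifications.forEach((c, i) => {
    if (shouldBlur(c)) blurTweet(tweets[i]);
  });
}
```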

Challenges we ran into

Because Twitter is built with React and updates its page elements dynamically, we had to use MutationObservers to reliably access the tweet data. The resulting flood of DOM updates triggered many classification requests, and we exceeded the call limit of the co:here API free tier.
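One way to keep the observer from burning through the API quota is to cache which tweets have already been classified and only send new ones. This is a sketch under assumed names, not our exact code:

```javascript
// Sketch: avoid re-sending the same tweet text to the API every time the
// MutationObserver fires. Names and structure are illustrative assumptions.
const seen = new Set();

// Return only the tweet texts not classified yet, marking them as seen.
function takeUnseen(texts) {
  const fresh = [];
  for (const t of texts) {
    if (!seen.has(t)) {
      seen.add(t);
      fresh.push(t);
    }
  }
  return fresh;
}

// Hook the cache into a MutationObserver (browser-only; not exercised here).
function observeTimeline(onNewTweets) {
  const observer = new MutationObserver(() => {
    const texts = [...document.querySelectorAll('[data-testid="tweetText"]')]
      .map((el) => el.innerText);
    const fresh = takeUnseen(texts);
    if (fresh.length > 0) onNewTweets(fresh); // classify only the new tweets
  });
  observer.observe(document.body, { childList: true, subtree: true });
}
```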

Accomplishments that we're proud of

We are proud of staying resilient through this learning journey and of the many hours we spent debugging. It was satisfying to set up the co:here API and gain easy access to an NLP model. This was also our first experience with web development and with programming in JavaScript.

What we learned

We learned how to set up a Chrome extension with HTML and JavaScript files, and how to communicate between scripts using the chrome.tabs API. We also learned how to access dynamically generated page elements and update the DOM with JavaScript, and how to feed data to an NLP model.
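The script-to-script communication we learned can be sketched like this, with a popup sending a toggle message to the content script. The message type and handler names are assumptions for illustration:

```javascript
// Sketch of popup-to-content-script messaging (message shape is an assumption).

// Pure helper: build the message the content script expects.
function makeToggleMessage(enabled) {
  return { type: "TOGGLE_FILTER", enabled };
}

// Popup side: send the toggle state to the active tab (browser-only; not run here).
function sendToggle(enabled) {
  chrome.tabs.query({ active: true, currentWindow: true }, ([tab]) => {
    chrome.tabs.sendMessage(tab.id, makeToggleMessage(enabled));
  });
}

// Content-script side: listen for the toggle and react.
function listenForToggle(onToggle) {
  chrome.runtime.onMessage.addListener((msg) => {
    if (msg.type === "TOGGLE_FILTER") onToggle(msg.enabled);
  });
}
```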

What's next for Twitter Inclusivity Filter

We would like to add a per-tweet toggle button alongside the global toggle. For production use, we would also need full co:here API access rather than the free tier.
