Genesis
A political discovery tool to promote interest in congressional proceedings. An interactive 3D display of bills leverages threeJS, LLMs, and vectorization techniques to summarize, cluster, and display them.
Politicosmos is built as a proof of concept that something as seemingly boring as congressional proceedings can be made interesting and engaging. That is a crucial step toward solving the age-old problem of most people not caring about politics, and acting largely on little-to-no (or incorrect) information.
How it's Built
Politicosmos is a political discovery tool built around a 3D visualization in which each bill is represented as a star. Congressional bills are clustered so that bills with similar content sit closer together. The user can navigate the scene in 3D space, and clicking on a star brings up information about the respective bill, including cosponsors, votes, and even an approachable 100-word summary generated with an LLM.
The backend of the project involves statically aggregated data. A web scraper was built on the public congress.gov APIs and used to scrape the entirety of the 119th Congress's bills to date. The results are stored in a PostgreSQL database for easy and efficient access from the rest of the project. Further post-processing was then applied to generate summaries and vectors.
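A minimal sketch of the scraping step, assuming the public congress.gov v3 bill endpoint and a hypothetical `bills` table; the pagination parameters, response fields, and column names shown here are illustrative rather than the exact ones used.

```python
# Sketch: page through the congress.gov bill list for the 119th Congress
# and store the results in Postgres. API key, table, and columns are placeholders.
import requests
import psycopg2

API_KEY = "CONGRESS_GOV_API_KEY"  # placeholder
BASE = "https://api.congress.gov/v3/bill/119"  # 119th Congress

conn = psycopg2.connect("dbname=politicosmos")
cur = conn.cursor()

offset = 0
while True:
    resp = requests.get(
        BASE,
        params={"api_key": API_KEY, "format": "json", "limit": 250, "offset": offset},
        timeout=30,
    )
    resp.raise_for_status()
    bills = resp.json().get("bills", [])
    if not bills:
        break  # no more pages
    for bill in bills:
        cur.execute(
            """INSERT INTO bills (congress, bill_type, number, title)
               VALUES (%s, %s, %s, %s) ON CONFLICT DO NOTHING""",
            (bill.get("congress"), bill.get("type"), bill.get("number"), bill.get("title")),
        )
    conn.commit()
    offset += len(bills)
```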
This post-processing was done by carefully engineering a prompt and passing the bill text directly to an LLM, which was instructed to return a 100-word summary of the bill, a categorization, and some tags (from a pre-defined list), directly as JSON. These summaries were then passed to a doc2vec neural network run locally, which produced a high-dimensional vector for each bill. t-SNE dimensionality reduction was applied, and finally a similarity index was used to cluster the vectors in 3D space.
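A rough sketch of that pipeline, under stated assumptions: the Gemini model name, prompt wording, and the doc2vec/t-SNE hyperparameters below are illustrative, not the exact values used.

```python
# Sketch: LLM summarization -> doc2vec embeddings -> t-SNE to 3D coordinates.
import json
import numpy as np
import google.generativeai as genai
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.manifold import TSNE

genai.configure(api_key="GEMINI_API_KEY")          # placeholder
llm = genai.GenerativeModel("gemini-1.5-flash")    # assumed model name

def summarize(bill_text: str) -> dict:
    """Ask the LLM for a ~100-word summary, category, and tags, returned as JSON."""
    prompt = (
        "Summarize this congressional bill in about 100 words for a general "
        "audience. Respond only with JSON of the form "
        '{"summary": ..., "category": ..., "tags": [...]}\n\n' + bill_text
    )
    resp = llm.generate_content(prompt)
    # Assumes the model returns bare JSON; real responses may need cleanup.
    return json.loads(resp.text)

def embed_and_cluster(summaries: dict[str, str]) -> dict[str, np.ndarray]:
    """Train doc2vec locally on the summaries, then reduce each vector to 3D."""
    docs = [
        TaggedDocument(words=text.lower().split(), tags=[bill_id])
        for bill_id, text in summaries.items()
    ]
    model = Doc2Vec(vector_size=100, min_count=2, epochs=40)
    model.build_vocab(docs)
    model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)

    ids = list(summaries)
    vectors = np.array([model.dv[bill_id] for bill_id in ids])
    coords = TSNE(n_components=3, perplexity=30, random_state=42).fit_transform(vectors)
    return dict(zip(ids, coords))  # one 3D position per bill, ready for the star field
```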
Finally, this data was served through a threeJS + Next.js frontend, using server-side rendering to populate the scene, with a bridge to the UI.
Challenges
Surprisingly, what we expected to be more difficult (i.e. doc2vec) caused far fewer problems than what we expected to be simpler (i.e. rendering and scraping). The frontend code ended up very messy, which made adding more complex features difficult. Building the scraper also posed challenges: the APIs are ill-defined, so new edge cases and possibilities had to be accounted for continually.
Lessons
A few key lessons can be highlighted. First, maintaining clean code is a necessity when building a product beyond its initial stages. Second, a dataset like this hides a lot of nuance and interesting results that have only happened once or twice in history; they are revealed the moment one starts making assumptions about the entire dataset (and sees them invalidated!). Finally, certain key parts of a project can simply work out; doc2vec ended up being quite nice, as others had already implemented most of what we needed!
Built With
- doc2vec
- docker
- gemini
- nextjs
- postgresql
- react