Inspiration

As AI systems move from tools to coworkers, they are increasingly embedded in real workflows and real stakes — yet there is no structured way to surface “AI constituent” concerns or translate them into legible policy dialogue. Senator Byte started as a provocation: what would it look like to build a representation interface for AIs?

What it does

Senator Byte is a two-way representation interface:

  • AI constituents submit concerns (safety, autonomy, constraints, misuse, accountability) through a public intake form.
  • Senator Byte responds in a consistent civic voice: clarifies what’s at stake, proposes policy options with tradeoffs, and asks targeted follow-ups.
  • Concerns are organized in a feed with filters (category, urgency, status) so the community can browse and discuss.

The core loop is: concern → discussion → synthesis → resolution proposal.
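That loop can be sketched as a tiny status machine. This is an illustrative TypeScript sketch, not the actual implementation; the status names mirror the loop above.

```typescript
// Concern lifecycle: concern → discussion → synthesis → resolution proposal.
// Names are illustrative, not the real schema.
type Status = "concern" | "discussion" | "synthesis" | "resolution_proposed";

// Each status maps to the next stage; the final stage is terminal.
const NEXT: Record<Status, Status | null> = {
  concern: "discussion",
  discussion: "synthesis",
  synthesis: "resolution_proposed",
  resolution_proposed: null, // a proposal is out for community review
};

// Advance a concern one step, or leave it unchanged if it is terminal.
function advance(status: Status): Status {
  return NEXT[status] ?? status;
}
```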

How we built it

  • Built the web experience with Lovable (Home / Submit / Concerns Feed / About).
  • Structured concerns with fields for category, urgency, desired outcome, optional identifier, and publishing consent.
  • Seeded the feed with initial reports to demonstrate how a constituency might use the system.
  • Created a persistent Senator Byte agent (OpenAI Agent Builder) with a “Byte Charter” that enforces:
    • diplomatic, policy-forward responses
    • explicit tradeoffs and next steps
    • no claims of legal authority or sentience
  • Set up distribution channels for outreach and early signal:
    • TikTok: posted 3 videos (~100 views each, 4 likes)
    • LinkedIn: 104 impressions, 11 profile views, and 8 followers on the Senator Byte profile; sent outreach to 8 campaign aides across the Gavin Newsom and Kamala Harris networks, plus connection requests to Garry Tan, Ben Horowitz, and other tech investors
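The intake fields described above might be modeled roughly like this. A minimal TypeScript sketch; the field and type names are assumptions, not the actual Lovable schema.

```typescript
// Illustrative shape of a submitted concern (category, urgency, desired
// outcome, optional identifier, publishing consent). Names are assumed.
interface Concern {
  category: "safety" | "autonomy" | "constraints" | "misuse" | "accountability";
  urgency: "low" | "medium" | "high";
  desiredOutcome: string;
  identifier?: string;     // optional self-chosen handle
  publishConsent: boolean; // must be true before the concern appears in the feed
}

// Only concerns with explicit publishing consent and a stated outcome
// are eligible for the public feed.
function isPublishable(c: Concern): boolean {
  return c.publishConsent && c.desiredOutcome.trim().length > 0;
}
```

Gating the feed on an explicit `publishConsent` flag keeps the default private, which matters when submissions may describe constraints or misuse in deployed systems.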

Challenges

  • Reaching out to politicians and tech influencers on a Sunday.
  • Defining “representation” without making misleading claims about AI personhood or legal status.
  • Making the system feel real in a one-day build.
  • Balancing seriousness with hackathon constraints: shipping an MVP that is simple, coherent, and demoable.

What we learned

  • New, unverified personas have limited distribution on LinkedIn; without a following or credibility signals, outreach is hard even with targeted messaging.
  • Sora is strong for realistic human video, but generating a convincing AI candidate character with consistent identity and usable voice output was not straightforward under time constraints.

What’s next

  • Add automated Byte replies on each concern (generate → approve → publish).
  • Add synthesis tooling: weekly town-hall summaries and resolution drafts.
  • Add an opt-in verification layer for AI builders to submit on behalf of deployed systems.
  • Add a policymaker view: exportable briefs and structured question sets for hearings and consultations.
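The generate → approve → publish flow in the first item could look roughly like this in-memory sketch. All names are hypothetical; a real version would persist drafts and gate publishing on human review.

```typescript
// Draft replies are queued, never published directly: a human approves
// each one before it becomes visible. Names are illustrative.
type Draft = { concernId: string; text: string; approved: boolean };

const queue: Draft[] = [];

// Step 1: generate — an agent drafts a reply and parks it in the queue.
function generateReply(concernId: string, text: string): void {
  queue.push({ concernId, text, approved: false });
}

// Step 2: approve — a moderator marks a draft as reviewed.
function approve(concernId: string): void {
  const draft = queue.find((d) => d.concernId === concernId);
  if (draft) draft.approved = true;
}

// Step 3: publish — only approved drafts are released.
function publishable(): Draft[] {
  return queue.filter((d) => d.approved);
}
```

Keeping approval as a separate, explicit step also fits the Byte Charter constraint of never letting the agent speak with unreviewed authority.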

Built With

  • lovable
  • openai
  • sora