Inspiration

We were inspired by Amazon’s return-to-office push. After Amazon announced a stricter five-day in-office policy starting in January 2025, a survey on Blind reported that 73% of Amazon respondents were considering quitting. That showed us how expensive and risky HR policy changes can be when a company learns the reaction only after the policy is already public. Echo is our attempt to give companies a “wind tunnel” for people decisions before they ship them.

What it does

Echo lets a company paste in a draft policy (a return-to-office mandate, compensation change, layoff announcement, benefits update, or hiring freeze) and simulates how the workforce might react over time. Instead of showing only abstract sentiment scores, Echo shows employees taking actions: messaging peers, posting in channels, going quiet, requesting exceptions, defending the policy, or even updating LinkedIn. The goal is to help HR teams spot problems early and improve policies before they damage trust.

How we built it

We started by creating a clear product plan: the app should take any policy, understand what it changes, simulate employee behavior, and show the results visually. Then we researched how agent-based simulations can be used to model human behavior, especially the idea that agents should take realistic actions rather than just receive sentiment scores. After that, we collaborated with AI coding models to generate and connect the frontend, backend, policy parser, agent simulation logic, action feed, and recommendation system.
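A rough sketch of how those pieces could connect, from policy text to a recommendation; every type, field name, and heuristic below is an assumption for illustration, not Echo's actual code:

```python
from dataclasses import dataclass

@dataclass
class PolicyChange:
    summary: str        # short description produced by the policy parser
    severity: float     # how disruptive the change is, 0..1

def parse_policy(text: str) -> PolicyChange:
    # Stand-in for the LLM-backed parser: flag common high-impact phrases.
    triggers = ("five-day", "layoff", "freeze", "mandate")
    hits = sum(1 for t in triggers if t in text.lower())
    return PolicyChange(summary=text[:60], severity=min(1.0, hits / 2))

def recommend(change: PolicyChange) -> str:
    # Stand-in for the recommendation system consuming simulation output.
    if change.severity >= 0.5:
        return "Pilot with one team and pre-brief managers before announcing."
    return "Proceed, but monitor the action feed for early escalations."
```

In the real system the parser and recommender would be model calls; the point of the sketch is the contract between stages, not the heuristics.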

Challenges we ran into

One of the hardest challenges was making the simulation feel realistic without becoming random or overly scripted. If every employee reacted the same way, the demo felt fake, but if the agents had too much freedom, the output became inconsistent. We solved this by combining deterministic scheduling with LLM-generated actions, so influential employees could react earlier, peer effects could spread through the company, and the final sentiment score could be derived from actual behavior rather than assigned directly.
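The hybrid approach can be sketched roughly as follows. This is a minimal toy, not Echo's implementation: the action names, impact weights, and the `choose_action` stand-in for the LLM call are all assumptions. What it preserves is the structure described above: a seeded (deterministic) schedule, influence-ordered reactions, peer effects, and a sentiment score derived from the action feed rather than invented.

```python
import random
from dataclasses import dataclass, field

# Illustrative action weights: sentiment is derived from what agents do.
ACTION_IMPACT = {
    "defend_policy": 2,
    "request_exception": -1,
    "vent_in_channel": -2,
    "go_quiet": -1,
    "update_linkedin": -3,
}

@dataclass
class Agent:
    name: str
    influence: float                      # 0..1, drives reaction order
    peers: list = field(default_factory=list)
    mood: float = 0.0                     # accumulates peer pressure

def choose_action(agent: Agent, rng: random.Random) -> str:
    # Stand-in for the LLM call: biased negative once peer pressure mounts.
    negative = ["vent_in_channel", "go_quiet", "request_exception", "update_linkedin"]
    if agent.mood < -1:
        return rng.choice(negative)
    return rng.choice(["defend_policy"] + negative)

def simulate(agents, seed=0):
    rng = random.Random(seed)             # deterministic scheduling
    score, feed = 0, []
    # Influential employees react first, so their actions shape their peers.
    for agent in sorted(agents, key=lambda a: -a.influence):
        action = choose_action(agent, rng)
        score += ACTION_IMPACT[action]
        feed.append((agent.name, action))
        for peer in agent.peers:          # peer effect spreads outward
            peer.mood += ACTION_IMPACT[action] * agent.influence
    return score, feed
```

Because the schedule and random source are seeded, the same workforce and policy replay identically, which keeps demos stable while the per-agent choices stay varied.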

Accomplishments that we're proud of

We are proud that Echo does not just produce another HR dashboard; it creates a live behavioral simulation. The action feed and org graph make the result feel concrete: you can actually watch a policy spread through a company and see which teams become frustrated, which people escalate, and which groups stay engaged. We are also proud that the product is not limited to one hardcoded demo and can handle many different types of HR policies.

What we learned

We learned that simulating people is much more powerful when you focus on behavior first and sentiment second. A number like “eNPS dropped by 13 points” is useful, but it becomes much more convincing when you can trace it back to specific actions, such as employees venting in Slack, managers receiving escalation messages, or high-value employees showing flight-risk signals. We also learned how important strict schemas, fallback logic, and clear product contracts are when building reliable AI products.
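The "strict schemas and fallback logic" lesson looks something like this in practice. The schema fields, allowed actions, and clamping range here are hypothetical; the pattern is what mattered: validate every model response against a fixed shape, and degrade to a safe default instead of crashing the simulation.

```python
import json

# Hypothetical schema for one agent action returned by the model.
ALLOWED_ACTIONS = {"defend_policy", "request_exception",
                   "vent_in_channel", "go_quiet", "update_linkedin"}
FALLBACK = {"actor": "unknown", "action": "go_quiet", "sentiment_delta": 0.0}

def parse_action(raw: str) -> dict:
    """Validate a model response against the schema; fall back on any failure."""
    try:
        data = json.loads(raw)
        if data.get("action") not in ALLOWED_ACTIONS:
            raise ValueError("unknown action")
        delta = float(data.get("sentiment_delta", 0))
        return {
            "actor": str(data["actor"]),
            "action": data["action"],
            "sentiment_delta": max(-3.0, min(3.0, delta)),  # clamp outliers
        }
    except (ValueError, KeyError, TypeError, json.JSONDecodeError):
        return dict(FALLBACK)  # keep the simulation running on bad output
```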

What's next for Echo

Next, we would focus on finding real customers, especially HR teams at fast-growing startups, remote-first or hybrid companies, and companies preparing sensitive policy changes. We would also add custom workforce uploads, so a company could simulate policies on its own org structure rather than a demo workforce. Longer term, Echo could become a policy testing platform where HR leaders compare multiple drafts, generate manager FAQs, and predict which communication strategy will create the least employee backlash.
