Redefining Possible: What Three Women in Tech Reveal About Building AI Products
How diverse perspectives shape better technology
When we talk about diversity in tech, the conversation often stays surface-level: representation numbers, hiring quotas, pipeline problems. But there’s a deeper question that gets less attention:
What do we actually lose when we build AI without diverse perspectives at the table?
Last year, for International Women’s Day, Emily and Giulia hosted a panel with three women working across different corners of the AI industry:
Ciara Anderson
Soo-Jin Lee
Yaddy Arroyo
A year later, as we mark Women’s Day again, their insights feel even more relevant. Because the question isn’t just about who gets to work in AI.
It’s about what kind of AI we build when different voices shape it.
The full conversation is available on YouTube, but here are the themes that stuck with us, especially for anyone building AI products today.
The “non-traditional” path is now the norm
One pattern emerged immediately: none of the panelists followed the stereotypical “coding since age 12” trajectory into tech.
Soo-Jin Lee came from playwriting and teaching English. Her background in language and nuance eventually led her to GenAI, where she found herself doing similar work—just with LLMs instead of students. She has since worked at various tech companies, navigating how language complexity shapes AI systems.
Yaddy Arroyo has been building AI products since 2011, after an earlier career in UX. Her path into tech was practical: she needed jobs, took opportunities as they came, and found herself working on machine learning interfaces. Now in banking, she focuses on how AI and data transform the way people interact with financial systems.
Ciara Anderson came from linguistics, earning her PhD in 2020 with a focus on syntax and semantics. She taught herself coding through boot camps and discovered conversational AI—a space where language and technology intersect in ways that fascinated her.
Here’s why this matters for AI products:
The most interesting problems in AI today aren’t purely technical.
They’re about understanding context, managing expectations, designing for trust, and navigating ambiguity. People who’ve worked across disciplines—psychology, education, design, policy—bring exactly the kind of thinking these problems require.
If your team is building AI agents, chatbots, or any kind of conversational interface, ask yourself: do the people designing it understand human behavior as deeply as they understand the technology?
The gap between what an LLM can do and what a user actually needs is a human problem, not a technical one.
The “only one” problem
Multiple panelists described the experience of being the only woman in the room. Not occasionally, but regularly: in meetings, on teams, and at conferences.
The challenge isn’t just discomfort; it’s cognitive load.
When you’re the only one, you become hyper-aware of how you’re perceived. You second-guess whether to speak up. You wonder whether your idea will be taken seriously or quietly dismissed.
This has direct implications for product development. When someone is spending energy managing how they’re perceived, they have less energy to contribute their actual expertise. You lose the insight, the challenge, the alternative perspective that could have made the product better.
The fix isn’t complicated, but it does require intentionality. If you’re hiring for an AI product team, the guiding principle is simple:
Create an environment where people don’t have to be the only one in the room.
Gatekeeping
One of the most striking parts of the conversation was about gatekeeping. Not the overt kind, but the subtle version that’s harder to identify.
It shows up in how technical skills are valued over interpersonal ones, even when the work requires both; in who gets invited to the “important” meetings; and in assumptions about who’s technical enough, who’s strategic enough, and who’s “ready” for leadership.
The panelists talked about how they navigated this. Sometimes it meant finding allies—people who believed in their work and vouched for them. Sometimes it meant building a portfolio so strong that it couldn’t be ignored. Sometimes it meant recognizing when a particular environment wasn’t going to change and finding a better one.
For leaders building AI products, this is a mirror moment.
Are you creating subtle barriers without realizing it? Are you defaulting to the same types of people for key roles? Are you equating “technical” with “valuable” and undervaluing the people who bring strategic thinking, user empathy, or cross-functional fluency?
Advice they wish they’d heard earlier
Toward the end of the conversation, the panelists shared advice for people entering the field. A few themes stood out:
You don’t have to know everything before you start. Imposter syndrome is real, and it’s especially acute for people entering spaces where they don’t see themselves reflected. But competence isn’t about knowing everything. It’s about being willing to learn, ask questions, and iterate.
Find your people. Whether it’s a formal mentorship, a Slack community, or just one person who gets it, having people who understand your experience makes a difference. You don’t have to navigate this alone.
Your perspective is valuable. If you’re coming from a non-traditional background, it’s an asset, not a deficit. The industry needs people who think differently, who ask questions others don’t think to ask, who bring lived experience that shapes how they approach problems.
Protect your energy. Not every battle is worth fighting. Sometimes the best move is to focus on doing great work, making connections, and finding environments where you can thrive.
Watch the full conversation on YouTube to hear more about their journeys, the challenges they’ve navigated, and the insights they’ve gained along the way.
Big thanks to Soo-Jin Lee, Yaddy Arroyo, and Ciara Anderson for sharing their stories.
Thanks for reading! If you found value in this post, feel free to share it with others.