The future of privacy is open source.

For too long, privacy has been fragmented across legal frameworks, compliance checklists, and slow-moving audits. Meanwhile, data moves faster than ever, and the teams on the front lines with sensitive data — engineers and data scientists — are left navigating complexity without the right tools.

This is why we created Fides: the first open-source privacy engineering standard, designed to make privacy a foundational layer of every tech stack.

TCP/IP, HTTP, and TLS are all open standards that power the digital world. But privacy has never had an equivalent. Instead, companies have been left to interpret regulations on their own, building one-off compliance frameworks that break as soon as systems scale.

Fides changes that. In 2024, we donated Fides to the IAB Tech Lab, where it became the foundation of the IAB Tech Lab Privacy Taxonomy — now the reference standard for privacy data classification across the global digital advertising ecosystem.

Fides provides a universal, transparent standard that ensures privacy is embedded into every system, enforced automatically, and auditable at any time. By being open source, it allows the global engineering community to contribute, refine, and evolve privacy infrastructure as fast as technology itself advances. It transforms compliance from a bureaucratic burden into an executable, machine-readable standard, allowing businesses to define, enforce, and scale privacy without guesswork.

Privacy should be built in the open, for everyone.

Read the full story of how Fides became the industry standard here: https://fid.es/4eT5SYQ
Ethyca
Software Development
New York, NY 5,034 followers
The Data Privacy and AI Governance Platform to accelerate data-driven growth.
About us
Ethyca builds automated data privacy infrastructure and tools for developers and privacy teams to easily build products that comply with GDPR, CCPA, and other privacy regulations. Ethyca's powerful, flexible tools for automated data privacy and protection give any business a future-proof path to compliance across global jurisdictions. Whether you're simply collecting consumer email addresses or inferring complex attributes from countless data points, Ethyca provides your product, engineering, and privacy teams with unmatched ease of use and functionality to better care for your users' data. We believe that user privacy matters more now than ever, and that the solution to managing user data is not in regulation but in code.

Twitter: https://twitter.com/ethyca
Instagram: https://www.instagram.com/ethyca
Blog: https://ethyca.com/news
- Website: https://ethyca.com
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: New York, NY
- Type: Privately Held
- Founded: 2019
- Specialties: GDPR, CCPA, Privacy Compliance, Privacy, PrivacyByDesign, Data Privacy Management, Data Protection Impact Assessment (DPIA) Automation, Subject Access Request (SAR) Automation, Consent Receipt Management, and Data Mapping Automation
Locations
- Primary: New York, NY 10001, US
- The Chq Building, Custom House Quay, North Wall, Dublin, IE
Updates
Of all the things we do to stay close to the market, small executive dinners consistently deliver the most value. We don't treat them as lead-generation activities. The point is simpler than that: there's no better way to understand the problems you're building for than sitting across from the people who are living them.

Twenty senior leaders in data, privacy, and engineering, around one table, with no slides, agenda, or corporate formalities. Just honest conversations about what's actually hard right now.

That's what our team learns most from: the unfiltered, in-person version of what practitioners are dealing with. The details that don't make it into press releases. The problems that are still too messy to have a clean narrative around.

We're building infrastructure for some of the most complex data and governance challenges enterprises face. The only way to build it well is to stay close to the people on the front lines.

Thank you to everyone who has attended. We're looking forward to hosting more.
Only 2% of companies meet standards for responsible AI use.

Four principles define what responsible AI is supposed to look like. Most organizations can articulate all of them. Far fewer can demonstrate them in practice.

→ Fairness and bias mitigation. Bias enters through training data, long before a model produces a single output. Without actively evaluating whether data is representative and testing outputs across defined groups, models don't introduce imbalance — they systematize it.

→ Transparency and explainability. In a 2024 McKinsey & Company survey, 40% of respondents identified explainability as a key risk in AI adoption. The reason is simple: if a decision can't be reconstructed, it can't be defended. Transparency requires preserving context across data sources, transformations, and decision logic — not just documenting intent.

→ Privacy and data protection. AI systems consume enormous volumes of data, often without retaining the context attached to it. Problems emerge when data is reused without clear consent, moved across systems without controls, or retained beyond its intended purpose. The constraint isn't collection — it's that usage rules need to follow data across its entire lifecycle.

→ Accountability and human oversight. As AI systems scale, ownership blurs. Clear accountability means every system has a defined owner, high-risk decisions have review paths, and intervention is possible when outputs fail. Without that structure, governance exists only on paper.

These principles are well understood. The gap is enforcement — making sure they hold once systems are live and decisions are happening continuously.

Read our full implementation guide covering where responsible AI breaks down across the lifecycle, and what it takes to build governance that actually holds in production: https://fid.es/4mT9Wuk
Trust in AI is built on infrastructure that makes privacy actionable.

The acceleration of AI adoption has exposed a fundamental gap in enterprise data systems: organizations can't prove their AI is safe, compliant, or trustworthy because their data infrastructure was never designed for it. Policy documents don't prevent harm. Legal reviews don't stop unauthorized access. Audit trails don't enforce consent in real time.

Ethyca's mission is to become the trusted data layer for enterprise and AI — unifying privacy, governance, and policy enforcement to make AI adoption safe and scalable. The trusted data layer Ethyca is building delivers five core capabilities:

1. Ontology & taxonomy: Model and enforce data policies through a unified privacy language, enabling consistent treatment of consent, classification, and governance across disconnected systems.
2. Data intelligence: Map and classify sensitive information at scale, maintaining always-on visibility across cloud environments, legacy databases, and third-party vendors.
3. Consent orchestration: Enforce user preferences with precision, applying consent signals across systems in real time and revoking access dynamically when permissions change or expire.
4. Automated subject rights: Handle data requests end to end, with intelligent routing, data resolution, and policy-driven response logic that eliminates manual reviews and audit gaps.
5. AI usage controls: Make privacy rules executable by design in AI systems, translating legal obligations into enforceable policies across training data, inference pipelines, and model governance.

The future of AI depends on organizations that can prove their systems operate safely and compliantly. Building that infrastructure is what drives everything at Ethyca.

If you want to join the mission, review our open roles: https://fid.es/3QbjYL8
Before every executive dinner, Ethyca Founder & CEO Cillian Kieran comes up with a personalized book gift for each attendee.

These aren't generic recommendations. Cillian curates specific titles for each individual — based on how they think about their work, where the field is heading, and what he thinks will push their thinking somewhere new.

It's a small gesture. But in a space moving as fast as data governance and AI, the signal matters: the people building this infrastructure need to be reading beyond the trade press.

A few titles that have made it to the tables recently at The Grill, Undercote, and Lure Fishbar:

→ Atlas of AI by Kate Crawford
→ How Progress Ends by Carl Benedikt Frey
→ The Thinking Machine by Stephen Witt
→ The Art of Invisibility by Kevin Mitnick
→ The Ethical Algorithm by Michael Kearns & Aaron Roth
→ The Twenty-Six Words That Created the Internet by Jeff Kosseff
→ Abundance by Ezra Klein & Derek Thompson
→ Human Compatible by Stuart Russell

The policy debates and compliance frameworks will keep evolving. But the underlying questions — about power, trust, agency, and what we're actually building — those require a wider frame.

What's a book that's genuinely shifted how you think about data, privacy, or AI governance?
We believe the future of privacy is open source.

For too long, privacy has been trapped in legal frameworks, compliance checklists, and slow audits. Meanwhile, data moves faster than ever, and the teams on the front lines with sensitive data — engineers and data scientists — are left navigating complexity without the right tools.

This is why we created Fides: the first open-source privacy engineering standard, designed to make privacy a foundational layer of every tech stack.

TCP/IP, HTTP, and TLS are all open standards that power the digital world. But privacy has never had an equivalent. Instead, companies have been left to interpret regulations on their own, building one-off compliance frameworks that break as soon as systems scale.

Fides changes that. It provides a universal, transparent standard that ensures privacy is embedded into every system, enforced automatically, and auditable at any time. By being open source, Fides allows the global engineering community to contribute, refine, and evolve privacy infrastructure as fast as technology itself advances. It transforms compliance from a bureaucratic burden into an executable, machine-readable standard, allowing businesses to define, enforce, and scale privacy without guesswork.

Privacy should be built in the open, for everyone.

You can explore the taxonomy on IAB Tech Lab's GitHub here: https://fid.es/3QbHszN
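To make "executable, machine-readable standard" concrete, here is a minimal sketch of the idea: dataset fields are labeled with taxonomy categories, and a policy check runs as ordinary code. The category keys mirror the style of the Fides taxonomy (e.g. user.contact.email), but the policy structure and the is_allowed function are simplified illustrations, not the actual Fides engine.

```python
# Each dataset field is annotated with a taxonomy category
# (category keys follow the Fides naming style; mapping is illustrative).
DATASET = {
    "users.email": "user.contact.email",
    "users.ip_address": "user.device.ip_address",
    "orders.total": "system.operations",
}

# A hypothetical policy: user-derived data may only be used for
# purposes explicitly allowed for its category.
POLICY = {
    "user.contact.email": {"transactional_messaging"},
    "user.device.ip_address": {"security"},
}

def is_allowed(field: str, purpose: str) -> bool:
    """Return True if the field's data category permits the purpose."""
    category = DATASET.get(field)
    if category is None or not category.startswith("user."):
        return True  # non-user data is unrestricted in this sketch
    return purpose in POLICY.get(category, set())

print(is_allowed("users.email", "transactional_messaging"))  # True
print(is_allowed("users.email", "ad_targeting"))             # False
print(is_allowed("orders.total", "ad_targeting"))            # True
```

Because the taxonomy and policy are plain data, the same check can run in CI, in a data pipeline, or at query time — which is the point of making the standard machine-readable.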
Thanks to Barry Winkless for having our Founder & CEO Cillian Kieran on the Future Work World Podcast. It was a great chat about what it takes to make businesses more trusted, and how we're providing the plumbing and pipework to make it possible. Watch or listen to the full episode for more 👇
Trust is not a feature, it is foundational infrastructure. In a data-driven economy, organisations that embed trust into their systems and operations are better positioned for long-term resilience. Treating trust as an afterthought creates risk; integrating it from the ground up creates stability.

Listen as Cillian Kieran, Founder & CEO of Ethyca, outlines why trust is becoming a core component of modern business architecture. You can watch or listen to the full episode using one of the links below, and don’t forget to subscribe to the Future Work World Podcast on our channels ⬇️

YouTube - https://shorturl.at/1ZaAz
Apple Podcasts - https://shorturl.at/gv8wP
Spotify - https://shorturl.at/KMyUU
Amazon Music - https://shorturl.at/e253y
Or Alternative Platforms - https://shorturl.at/3sM4K

Barry Winkless
Third-party cookie deprecation has been framed as a marketing problem for years. Ethyca Chief Architect Ethan Lo thinks that's the wrong diagnosis.

Lo argues the real impact isn't lost ad signal — it's that enterprises have inherited a liability they were never built to carry. And it's created a real governance crisis.

When organizations relied on third-party trackers, risk was distributed across a network of external vendors. The moment that network collapsed and enterprises rushed to build first-party data programs, they absorbed total legal exposure overnight. Every piece of first-party data now requires consent that is explicitly captured, securely stored, continuously enforced, and fully auditable across every downstream system. Most enterprise infrastructure was never designed for that.

Lo identifies where this breaks down in practice: consent captured at the browser means nothing if it doesn't instantly propagate to the data warehouse, the CRM, the marketing automation platform, and the AI pipeline. Manual processes can't keep pace. Batch updates create gaps. Those gaps are exactly what regulators are now auditing for.

The AI dimension is where the stakes escalate. Once a model trains on unconsented data, identifying and extracting that specific information is nearly impossible. Cookie deprecation, as Lo frames it, is now a fundamental test of AI data readiness — not a web tracking problem.

There are four steps to solving this problem at the architecture level:

→ Automate data inventory and lineage across the full stack
→ Establish a unified privacy taxonomy across legal and engineering teams
→ Orchestrate consent enforcement in real time at the system level
→ Embed access controls directly into the data layer — not compliance portals

Organizations treating this as a marketing ops problem are solving for the wrong thing. The ones building governance infrastructure are the ones who will be able to use their first-party data — for AI, for analytics, for anything — without flinching when regulators ask.

For a deeper dive into the third-party cookie deprecation problem, read the full guide: https://fid.es/4ciMfXo
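The real-time propagation problem described above can be sketched in a few lines: a consent change is published once and applied by every registered downstream system in the same pass, rather than waiting for a nightly batch window during which stale consent could still be acted on. The system names and handler interface here are hypothetical illustrations, not a real Ethyca or Fides API.

```python
from dataclasses import dataclass

@dataclass
class ConsentEvent:
    user_id: str
    purpose: str    # e.g. "ad_targeting"
    granted: bool

class ConsentBus:
    """Fans each consent change out to every registered downstream system."""
    def __init__(self):
        self._handlers = []

    def register(self, system_name, handler):
        self._handlers.append((system_name, handler))

    def publish(self, event):
        # Every system applies the change in the same pass -- no gap
        # between the browser signal and downstream enforcement.
        return {name: handler(event) for name, handler in self._handlers}

bus = ConsentBus()
bus.register("warehouse", lambda e: "unmask" if e.granted else "mask")
bus.register("crm", lambda e: "opt-in" if e.granted else "opt-out")
bus.register("ml_pipeline", lambda e: "include" if e.granted else "exclude")

# A user revokes ad-targeting consent; all three systems see it at once.
result = bus.publish(ConsentEvent("u123", "ad_targeting", granted=False))
print(result)  # {'warehouse': 'mask', 'crm': 'opt-out', 'ml_pipeline': 'exclude'}
```

In production the handlers would be asynchronous connectors to real systems, but the design point stands: one event, one authoritative fan-out, no batch lag.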
Every enterprise AI conversation eventually runs into the same wall: the data underneath the models isn't trusted. That wall is what Ethyca is meant to break down — and it's exactly what our Founder and CEO Cillian Kieran will dig into with Barry Winkless on the next episode of Cpl's Future Work World Podcast. The conversation will go beyond compliance and governance mechanics. It's about what trust actually means in an era where machines are making decisions that affect people's lives, and why the organizations that engineer trust into their data infrastructure are the ones that will be able to move with confidence on AI. Episode 14 of Future Work World — "Why Trust is the New Currency in the Human Machine Age" — goes live this Thursday, April 16th. Watch it here: https://lnkd.in/dUXx8WnP
The most valuable conversations in data privacy and governance happen in rooms like these.

At the 2026 Apache Iceberg Summit, Ethan Lo, Michael Brown, and the Ethyca team hosted a private dinner in San Francisco for a group of engineering and data privacy leaders. People like Baxter Stein from Capital One and Deniz Valle from Nozomi Networks joined us for the kinds of quiet conversations that never happen during an industry event.

Nobody in that room was debating whether good governance starts with taxonomy, cataloguing, and operationalizing data from the ground up. That part is settled. What engineers are actively wrestling with is that implementing that foundation looks completely different in an AI-driven stack than it did even three years ago. AI is reshaping how data is consumed and utilized at a speed that's stress-testing the governance architectures built to manage it.

Privacy and AI counsel roles are proliferating on the legal side — and for the first time, those teams are finding they genuinely can't build durable solutions without going deep with their engineering counterparts. Assessments, compliance frameworks, and risk infrastructure won't hold up if the technical foundation isn't sound.

The conversation in this space is shifting from privacy to governance. And for the engineering leaders in that room, the mandate is expanding in the actual systems they're being asked to build and maintain.