Autodesk just posted another stellar quarter: $1.96B in revenue and 38% non-GAAP operating margin 🔥 We’re proud to partner with their engineering organization as the visibility plane for continuous improvement — helping teams see how software gets built, find what’s slowing them down, and ship better, faster. Engineering visibility and control at enterprise scale. That’s what we do. See how Autodesk uses Faros 👇 https://lnkd.in/gUJZ6gik #EngineeringExcellence #DeveloperProductivity #Autodesk #FarosAI
About us
Faros AI helps improve engineering productivity and the developer experience. Enterprises use Faros AI to increase engineering efficiency, accelerate AI transformation, and execute large-scale engineering initiatives. With no prerequisites to refactor or standardize data in advance, Faros AI analyzes task, deployment, quality, incident, security, org structure, and developer survey data from over 100 SaaS tools and custom data sources. The platform’s Lighthouse AI features leverage statistical analysis, machine learning, and GenAI to deliver critical insights, identify friction and root causes, and suggest team-tailored recommendations. Faros AI users can build custom metrics, dashboards, and reports to support unique needs, recurring operational cadences, and impromptu business analysis.
- Website: https://www.faros.ai
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: San Francisco Bay Area
- Type: Privately Held
- Founded: 2019
- Specialties: developer productivity, developer experience, engineering transformation, AI transformation, AI technology evaluation and impact, engineering metrics, AI/ML, devops, GitHub Copilot impact, engineering modernization, engineering excellence, and cloud
Locations

Primary: San Francisco Bay Area, US
Updates
-
Which AI models are developers currently choosing in 2026? We scoured Reddit, interviewed developers, and compiled a list of the top 5 front-runners:

- GPT-5.2 (and GPT-5.2-Codex)
- Claude Opus 4.5
- Gemini 3 Pro
- Claude Sonnet 4.5
- Cursor’s Composer-1

Hear what developers say about each model’s strengths, limitations, and what they’re best for 👉 https://lnkd.in/d9wYDfXC
-
"What percentage of our code is AI-generated?" It's the obvious question. It's also the wrong one. Tracking AI-generated code volume can make sense for governance, like assessing repository risk or long-term maintainability. But using it to evaluate productivity brings back a metric we already learned to distrust. Lines of code was a poor proxy for developer productivity, and it’s just as misleading for AI. Plus, technical limitations make accurate measurement nearly impossible, and changes in code quality—along with their downstream impact—don't show up when you're just counting lines. If you want to understand AI’s real impact, the most useful signals are outcome-based: cycle time, quality, delivery velocity, and how reliably value reaches production. We put together a practical list of outcome-based metrics, ranked by how directly they inform business decisions: https://lnkd.in/gHiwu_7M
-
Which AI coding agents are developers actually choosing in 2026? We scoured Reddit, interviewed developers, and compiled a ranked list.

🚀 Front-runners: Cursor, Claude Code, Codex, GitHub Copilot, Cline
⚡ Runners-up (strong, but increasingly niche): RooCode, Windsurf, Aider, Augment, JetBrains Junie, Gemini CLI
🌱 Emerging contenders to watch: AWS Kiro, Kilo Code, Zencoder

What mattered most wasn’t the feature list. Instead, developers are asking:
- “Will this burn my tokens?” → Token efficiency and price
- “Does it actually make me faster?” → Productivity impact
- “Can I trust the output?” → Code quality & hallucination control
- “Does it understand my repo?” → Context window & repo understanding
- “Where does my code go?” → Privacy, security & data control

See the full landscape through developers’ eyes here 👉 https://lnkd.in/gv6muQfd

Which AI coding tools are you using daily—and why? Did we miss one you think belongs in the top tier?
-
Be honest — is your engineering org really tracking all five DORA metrics reliably? (And yes, there are 5.)

Most teams believe they are. Until someone asks why a metric changed… and no one can answer.

Reliable DORA measurement is genuinely hard, especially in enterprise environments. Many tools were designed for simpler setups and struggle when reality gets messy:
- Custom deployment processes break standard assumptions
- Monorepos blur team-level attribution
- Proxy metrics miss important context
- AI adoption increases volatility
- Newer metrics like rework rate remain inconsistent

The result is dashboards that look precise but don’t hold up when leaders try to use them for decisions.

If your current DORA measurement software isn’t keeping up, here’s a practical selection guide to help evaluate other options for 2026: https://lnkd.in/gmqTZnJe

BTW, what does good look like nowadays? With DORA transitioning to more complex classifications, we put together a simplified reference table. Use this to identify your biggest gaps and track improvement over time.

Curious — which DORA metric has been hardest for your organization to measure accurately?
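As a toy illustration of why classification is the easy part and measurement is the hard part, here is a deployment-frequency bander. The thresholds loosely follow the commonly published DORA performance bands but are simplified assumptions, not an official DORA or Faros AI implementation; the real difficulty is producing a trustworthy `deploys_per_week` input in the first place.

```python
def classify_deploy_frequency(deploys_per_week: float) -> str:
    """Map a raw deploy rate to a DORA-style band (simplified, assumed thresholds)."""
    if deploys_per_week >= 7:      # at least daily → roughly "on-demand"
        return "elite"
    if deploys_per_week >= 1:      # between weekly and daily
        return "high"
    if deploys_per_week >= 0.25:   # roughly monthly to weekly
        return "medium"
    return "low"                   # less than about once per month

print(classify_deploy_frequency(10))   # → elite
print(classify_deploy_frequency(0.5))  # → medium
```

Note that every messy-reality item in the list above (custom deploy processes, monorepo attribution) corrupts the input rate, not this classification step, which is why precise-looking dashboards can still be wrong.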
-
AI spend is easy to approve. Keeping control over it is much harder.

Kudos to Anthropic: Claude Code now transparently reports token usage and estimated costs. But if you're only looking at tokens and dollars, you're missing the point. The real question isn't how much you're spending. It's whether that spend is delivering impact.

Faros AI provides the measurement layer that connects Claude Code usage to real engineering outcomes: faster delivery, higher quality, and healthier repos. Here's what it looks like:

→ Usage-to-outcome correlation: Connect Claude Code adoption and acceptance rates to PR merge rate, review time, and PR size—see if AI is accelerating delivery or just shifting bottlenecks downstream.
→ Cost-per-commit visibility: Track spend by model (e.g., Opus vs. Sonnet) and tie it to actual output. Higher cost should mean higher impact.
→ Team-level adoption analysis: Identify who's getting value, who needs enablement, and where licenses are going unused.
→ Downstream quality tracking: Link AI-generated code to Change Failure Rate, MTTR, and rework rate. More code shouldn't mean more incidents.
→ Causal analysis: Separate AI's true effect from confounding factors like team composition and project complexity—so you know what's actually working.

AI spend is easy to justify. Proving AI impact is harder. That's what we're here to solve.

For more details on what this actually looks like in practice → https://lnkd.in/gE_Hi8G4
-
There’s a problem with AI-generated code. It’s getting cranked out faster than development workflows and infrastructure can support. Old bottlenecks are getting worse, new ones are popping up.

The hardest part of bottleneck resolution is getting from “things feel slow” to “here’s the bottleneck, here’s why it’s happening, and here’s whether it’s worth addressing.” The most effective remediation approach combines the right data (normalized and contextualized) with measurement, benchmarking, alerting, and investigation.

Faros AI works across your entire tool stack, normalizes data automatically, and provides insights at the level of granularity you need—whether that's a bird's-eye organizational view or a deep dive into a specific repository or team—so you can see bottlenecks early, understand them deeply, and act on them quickly.

https://lnkd.in/gBRXTZxy
-
Exciting news! You can now purchase Faros AI through Google Cloud Marketplace!

The developer productivity insights platform built to meet the needs of large engineering teams is now available with the billing, reporting, and governance you already use through Google.

"This is what customers have been asking for: Faros AI insights from the cloud provider where they already run their business,” says 🌎 Vitaly Gordon, CEO and Co-founder of Faros AI. “Google Cloud Marketplace makes it faster than ever to unify engineering data, identify friction, and take action."

Faros AI helps world-class engineering organizations achieve engineering excellence, measure and accelerate AI's impact, and deliver consistently on their innovation roadmaps. With no prerequisites to refactor or standardize data in advance, Faros AI analyzes task, coding, deployment, quality, incident, security, org structure, and survey data from 100+ tools and custom sources. It delivers critical insights, identifies friction and root causes, and suggests recommendations on how to improve.

https://lnkd.in/gSyxMrMf
-
Context engineering beats prompt engineering. We’re betting big on that.

In 2026, the teams reliably shipping AI-generated code will be the ones who’ve nailed what their agents see, when they see it, and how that context is structured.

If “context engineering” still feels a bit hand-wavy, we just published a practical guide that breaks down:
- Why context has overtaken prompts as the real leverage point
- The five critical context engineering strategies your coding agents need
- How to plug those strategies into your day-to-day workflow on real enterprise codebases

Learn how to architect the full context stack → https://lnkd.in/g2S53yfS
-