AI Security Agents Combat AI-Generated Code Risks
Endor Labs began life in 2022 trying to solve the thorny issue of application security at a time when software engineering practices were changing, with 80–90% of code being open source. Organizations were struggling to adapt to the shift toward open source, with scanning tools sending massive numbers of alerts to developers, most of which turned out to be false positives.
Endor Labs waded into the fray, intent on helping enterprises accelerate their AppSec practices without taxing developers, according to co-founder and CEO Varun Badhwar. The Palo Alto, California-based startup created a novel approach to the challenge, building a global call graph for applications.
Through the technology, Endor Labs could understand every line of code: the package dependencies, which libraries were actually used and which were not, and which libraries each library would call in turn.
“We could trace the entire graph of your application all through static analysis, nothing in runtime,” Badhwar told The New Stack, giving the company the ability to reduce the number of alerts by 92%. “Very quickly, we took them from, ‘This is a hopeless situation’ to ‘Just one engineer in my company can now deal with this [alert] backlog.’”
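The core idea behind that alert reduction can be illustrated with a toy reachability check. The sketch below is a minimal, hypothetical model (the function names and graph are invented, not Endor Labs' actual data structures): a known vulnerability only matters if application code can actually reach the vulnerable function through the call graph.

```python
from collections import deque

# Hypothetical call graph: each function maps to the functions it calls.
# In practice such a graph is extracted by static analysis of the app
# and every transitive open source dependency.
CALL_GRAPH = {
    "app.main": ["app.handle_request"],
    "app.handle_request": ["libA.parse"],
    "libA.parse": ["libB.decode"],
    "libB.decode": [],
    "libC.unused_helper": ["libB.decode"],  # ships in a dependency, never called
}

def is_reachable(entrypoint: str, target: str, graph: dict) -> bool:
    """Breadth-first search: can `target` be reached from `entrypoint`?"""
    seen, queue = {entrypoint}, deque([entrypoint])
    while queue:
        fn = queue.popleft()
        if fn == target:
            return True
        for callee in graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return False

# A CVE in libB.decode is a real risk; one in libC.unused_helper is noise.
print(is_reachable("app.main", "libB.decode", CALL_GRAPH))         # True
print(is_reachable("app.main", "libC.unused_helper", CALL_GRAPH))  # False
```

Filtering alerts to only the reachable code paths is, in simplified form, how a call graph lets a scanner suppress findings in code that is present but never executed.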
All the work to create the graph and map the myriad dependencies of open source code put Endor Labs in a position to address the next sea change in software development: the use of AI in coding and the rise of “vibe coding.” If 80% or more of code today comes from open source, that soon will shift to 80% being generated by AI, according to the company.
Protecting AI With AI
To that end, Endor Labs this week is expanding its AppSec platform to address the rapid migration toward AI-based coding by offering AI Security Code Review, an agentic AI functionality to identify risks, prioritize them, recommend remediations and automatically apply fixes.

“As developers produce this software, we are reviewing that with the lens of a security architect and application security practitioner to reason with that and understand not just the known vulnerabilities, which is where a lot of the compliance world is always fixated — ‘Let’s see the CVEs [common vulnerabilities and exposures] and the CWEs [common weakness enumerations]’ — but looking at fundamental architectural problems with the software being produced,” he said.
That includes seeing if the AI-generated code is exposing an API that is not correctly secured, or ensuring that a new database schema that handles personally identifiable information (PII) is correctly encrypted and secured.
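One way such an architectural check can work is by walking a file's syntax tree rather than pattern-matching text. The sketch below is an illustrative toy, not Endor Labs' implementation: it assumes a Flask-style codebase where every route is supposed to carry a (hypothetical) `require_auth` decorator, and flags routes that lack it.

```python
import ast

# Sample source under review. The `require_auth` decorator is an assumed
# in-house convention, used here purely for illustration.
SOURCE = '''
@app.route("/public/health")
def health():
    return "ok"

@app.route("/admin/users")
@require_auth
def list_users():
    return db.query_users()
'''

def decorator_names(fn: ast.FunctionDef) -> list:
    """Collect the bare names of a function's decorators."""
    names = []
    for dec in fn.decorator_list:
        node = dec.func if isinstance(dec, ast.Call) else dec
        if isinstance(node, ast.Attribute):
            names.append(node.attr)
        elif isinstance(node, ast.Name):
            names.append(node.id)
    return names

def unauthenticated_routes(source: str) -> list:
    """Return names of route handlers missing the auth decorator."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            decs = decorator_names(node)
            if "route" in decs and "require_auth" not in decs:
                findings.append(node.name)
    return findings

print(unauthenticated_routes(SOURCE))  # ['health']
```

A real reviewer would combine many such structural rules with reasoning about intent; the point is that the check operates on the code's structure, not just known-vulnerability lists.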
“We’re trying to provide that balance to basically combat AI-powered code with AI-powered security code reviews,” Badhwar said.
Developers Grab Onto AI
That makes sense. Developers are embracing AI as they look to improve their productivity and get software built and out to the market as fast as possible. GitHub in 2024 found that 97% of programmers surveyed had used AI coding tools at some point. Stack Overflow is seeing similar enthusiasm. In a survey of 65,437 programmers last year, 76% said they were using or planning to use such tools.
AI-assisted coding comes with security risks, from model theft to data breaches to, more recently, the colorfully termed “slopsquatting.”
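Slopsquatting works because AI assistants sometimes hallucinate plausible-sounding package names, which attackers then register with malicious payloads. A cheap defense, sketched below under assumptions of our own (the allowlist is invented, not any vendor's policy), is to screen AI-suggested dependencies against a vetted set before installing anything.

```python
# Illustrative allowlist of vetted packages; a real organization would
# maintain this in a registry or internal mirror, not a hardcoded set.
VETTED = {"requests", "flask", "numpy", "pandas", "cryptography"}

def screen_dependencies(suggested: list) -> tuple:
    """Split AI-suggested packages into approved and needs-review buckets."""
    approved = [p for p in suggested if p.lower() in VETTED]
    suspicious = [p for p in suggested if p.lower() not in VETTED]
    return approved, suspicious

# "reqeusts" is a classic typo/hallucination-style name an attacker
# could register; "flask-auth-utils" is a plausible invented package.
ok, flagged = screen_dependencies(["requests", "reqeusts", "flask-auth-utils"])
print(ok)       # ['requests']
print(flagged)  # ['reqeusts', 'flask-auth-utils']
```

The suspicious bucket would then go to a human or a scanner for review rather than straight into `pip install`.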
Endor Labs isn’t the only vendor making the AI-to-secure-AI play. In a blog post in January, Semgrep outlined how its AI-based Autotriage technology uses large language models (LLMs) and retrieval-augmented generation (RAG) to help its Semgrep Assistant triage security vulnerabilities in coding with 96% accuracy.
“Put another way: Assistant rarely misses a true positive, so security teams can confidently use Assistant to filter out non-exploitable findings (and tell developers that they will only be alerted when there are real issues with their code),” wrote Jack Moxon, staff product manager at Semgrep, and Seth Jaksik, a software engineer with the San Francisco company.
Endor Labs’ differentiator is the data that underpins its technology, which it’s been building up over the past three years, the CEO said. The company has analyzed 4.5 million open source projects and AI models, mapped more than 150 risk factors to each, built call graphs indexing billions of functions and libraries, and detailed exact lines where known security flaws exist.
Links Between Open Source and AI Code
Though that data is steeped in open source, it is key when addressing AI-based coding.
“As you look today, more of this code is written by Cursors and Copilots of the world, except there’s one fundamental thing: these Copilots do not write novel software,” the CEO said. “They write software that they’re trained on and learn from, which is all basically open source software. All of these models, regardless of which vendor you choose, are trained on open source software.”
Endor Labs’ expanded platform now includes dedicated AI agents built for application security that can reason over code — like developers and architects do — while reviewing code, identifying risks and recommending fixes, as security teams do.

The first agentic AI capability, security code reviews, will be available in May. It uses multiple agents to review pull requests for architectural changes that affect an enterprise’s security posture and may not be picked up by static application security testing (SAST) and vulnerability scanning tools. Such changes include adding systems vulnerable to prompt injection (where bad actors manipulate prompts to push an AI model outside its security policies), exposing public API endpoints, altering cryptographic tools or authorization processes, and modifying how sensitive data is handled.
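A rough sense of what a pull-request review looks for can be given with a simple diff scan. The patterns below are illustrative stand-ins, not the rules any vendor actually ships: the sketch flags added lines in a unified diff that touch security-sensitive areas such as new endpoints, weak crypto or disabled verification.

```python
import re

# Illustrative risk patterns; real agentic review reasons over structure
# and context rather than matching regexes.
RISK_PATTERNS = {
    "new public endpoint": re.compile(r"@app\.route\("),
    "crypto change": re.compile(r"\b(md5|sha1|DES|ECB)\b", re.IGNORECASE),
    "auth change": re.compile(r"\b(verify=False|allow_anonymous|skip_auth)\b"),
}

def scan_diff(diff_text: str) -> list:
    """Flag added lines in a unified diff matching any risk pattern."""
    findings = []
    for lineno, line in enumerate(diff_text.splitlines(), 1):
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only inspect added lines, skip file headers
        for label, pattern in RISK_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label, line[1:].strip()))
    return findings

DIFF = """\
+++ b/api/routes.py
+@app.route("/export")
+def export():
+    return requests.get(url, verify=False)
"""

for finding in scan_diff(DIFF):
    print(finding)
```

An agentic reviewer goes further by asking why a change was made and whether it fits the application's architecture, but the scan above shows the class of change that slips past conventional vulnerability scanners.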
Endor Labs’ MCP Server
Endor Labs is also introducing the MCP Server to address such evolutions as vibe coding, where developers rely on AI assistants to generate code from natural-language prompts, enabling them to move quickly. The plugin helps address security issues as AI-native tools like Cursor and GitHub Copilot are being used.
“This is the vibe coding era, where AI coding assistants generate large volumes of code with minimal developer oversight or review,” Amond Gupta, vice president of product at Endor Labs, and Dimitri Stiliadis, co-founder and CTO, wrote in a blog post. “Developers increasingly trust their AI assistants, often accepting suggestions with little modification. It’s fast, efficient, and transformative — but it’s also risky.”
The MCP Server is there to reduce that risk, Badhwar said.
“What it essentially does is reasons with Cursor and Copilots as they’re producing code and interacts with it to help it and prompt it to write more secure code as it’s being built, flag for it problems right off the bat, way before even a pull request is created, to say, ‘Here are the problems in the code you’re recommending’ and iterating with it,” he said.
The MCP plugin will also recommend different paths for fixing security problems.
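The interaction loop being described can be sketched as a tool the coding assistant consults before a pull request even exists. Everything below is a hypothetical model (the finding names, fix options and detection logic are invented, and the real MCP Server's interface is not shown): the tool reviews a snippet and returns findings paired with alternative remediation paths.

```python
# Illustrative catalog mapping a finding to multiple fix paths, mirroring
# the idea that the plugin recommends different ways to resolve an issue.
FIX_PATHS = {
    "hardcoded-secret": [
        "move the secret to an environment variable",
        "fetch it from a secrets manager at runtime",
    ],
    "weak-hash": [
        "replace md5 with sha256 for integrity checks",
        "use a dedicated password-hashing function for credentials",
    ],
}

def review_snippet(code: str) -> list:
    """Return (finding, fix options) pairs for patterns in a code snippet."""
    findings = []
    if "md5(" in code:
        findings.append(("weak-hash", FIX_PATHS["weak-hash"]))
    if "API_KEY =" in code and '"' in code:
        findings.append(("hardcoded-secret", FIX_PATHS["hardcoded-secret"]))
    return findings

snippet = 'API_KEY = "sk-123"\ndigest = md5(data)'
for finding, options in review_snippet(snippet):
    print(finding, "->", options[0])
```

In the real workflow, the assistant would feed a chosen fix path back into its next generation attempt, iterating until the snippet passes review, which is the "interactive process" the CEO describes.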
“That interactive process is going to be extremely valuable moving forward because, if you don’t prevent these software issues from getting created in the first place, if you think you have a swamp of a backlog of CVEs and security issues that are piling up for the last decade, it’s going to get five times worse,” the CEO said.
$93 Million Windfall
As Endor Labs moves forward with this, it’s being boosted by $93 million in Series B funding led by DFJ Growth with participation from Salesforce Ventures. Also contributing are existing investors, such as Lightspeed Venture Partners, Coatue, Dell Technologies Capital, Section 32 and Citi Ventures. The company announced in August 2023 it had raised $70 million in a Series A round.
Badhwar said the money will be used to continue developing its products and bring in more employees, with plans to grow the 140-plus workforce by 50% this year.
Endor Labs executives are also keeping a cautious eye on macroeconomics, with the CEO noting that even top economists are unsure whether the world will slide into a recession or currencies will continue to weaken.
“Being in a position of strength with the capital allows us to make the big bets that we want to without compromising on product quality or our vision to really be the best … security company on the planet,” he said, noting the company’s 30x growth in annual recurring revenue (ARR) over the past 18 months and 166% net revenue retention (NRR).
Endor Labs’ Gupta and Stiliadis wrote in their blog that more is on the way.
The “launch of our MCP Server and AI Security Code Review represents the first phase of our vision for securing the future of AI-native software development,” they wrote. “Our roadmap includes additional capabilities planned for release in the coming months, all focused on providing security teams with actionable intelligence and automated remediation tools.”