Why 40% of AI projects will be canceled by 2027 (and how to stay in the other 60%)
The agentic AI race is on, and most organizations are at risk of losing it. Not because they lack ambition, but because they’re fighting three wars simultaneously without a unified strategy.
Looking at what separates successful agentic AI programs from the 40% that Gartner predicts will be canceled by 2027, the pattern is clear. Organizations aren’t failing at AI. They’re failing at the infrastructure that makes AI work at enterprise scale.
There are three underlying crises behind most AI initiatives, and solving them independently doesn’t work. They have to be addressed together, through a unified AI connectivity program.
The three crises derailing agentic AI infrastructure
Crisis #1: Building sustainable velocity
Everyone knows speed matters. Boards are demanding AI agents, executives are funding pilots, and teams are racing to deploy.
But urgency hasn’t translated to velocity. S&P Global reports that 42% of companies are abandoning AI initiatives before production. Organizations are deploying agents quickly, but then pulling them back just as quickly.
The uncomfortable truth is that many of the organizations that moved fastest are now moving backward. Consider McDonald’s terminating its AI voice ordering program after deploying it to over 100 locations, or the 39% of AI customer service chatbots that were pulled back or reworked.
Speed without first establishing a foundation creates technical debt that compounds until it forces a complete rebuild.
The organizations achieving sustainable velocity aren’t just moving fast. They’re moving fast on infrastructure that supports iteration rather than requiring restarts.
Crisis #2: The fragmentation tax
While teams race to deploy, Finance and FinOps teams are watching margins erode. 84% of companies report more than 6% gross margin erosion from AI costs, and 26% report erosion of 16% or more.
This isn’t coming from strategic over-investment, but from chaos: fragmented systems, untracked token consumption, zombie infrastructure, and redundant tooling scattered across teams that don’t know what the others are building.
There’s also the secondary problem that it’s not possible to monetize what can’t be measured. Organizations hemorrhaging margin simultaneously leave revenue on the table because they lack visibility into usage patterns, unit economics, and the data required for usage-based pricing.
Only 15% of companies can forecast AI costs within ±10% accuracy. Everyone else is operating on hope rather than data.
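To make "forecasting within ±10%" concrete, here is a minimal Python sketch of metered cost attribution and a forecast-accuracy check. The team names, token counts, and the blended per-token price are hypothetical, chosen only for illustration; real attribution would pull from gateway metering data.

```python
# Illustrative only: prices, teams, and token counts are hypothetical.
from collections import defaultdict

PRICE_PER_1K_TOKENS = 0.01  # hypothetical blended $/1K tokens


def attribute_costs(usage_records):
    """Roll metered token usage up into per-team dollar costs."""
    costs = defaultdict(float)
    for record in usage_records:
        costs[record["team"]] += record["tokens"] / 1000 * PRICE_PER_1K_TOKENS
    return dict(costs)


def within_forecast(forecast, actual, tolerance=0.10):
    """True if actual spend lands within +/- tolerance of the forecast."""
    return abs(actual - forecast) <= tolerance * forecast


usage = [
    {"team": "search", "tokens": 1_200_000},
    {"team": "support", "tokens": 3_400_000},
]
costs = attribute_costs(usage)
total = sum(costs.values())
print(costs)                          # per-team attribution
print(within_forecast(50.0, total))   # did actuals land within +/-10%?
```

Without the metered `usage` records, neither the attribution nor the forecast check is possible, which is the point: measurement has to come before forecasting or monetization.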
Crisis #3: The shadow AI time bomb
The third crisis is quieter but potentially more damaging. 86% of organizations have no visibility into their AI data flows. 20% of security breaches are now classified as Shadow AI incidents. And 96% of enterprises acknowledge that AI agents have either already introduced security risks or will soon.
Development teams under pressure to ship are spinning up LLM connections, routing sensitive data to models, and expanding agent-to-agent communication, all often without security review. The attack surface grows with every deployment, but visibility doesn’t keep up.
By the time organizations discover the problem — through a breach, a failed audit, or a regulatory inquiry — the damage is structural. Remediation means rollbacks, rebuilds, and reputational harm that takes years to recover from.
Why solving these separately doesn’t work
Most organizations get it wrong by treating speed, cost, and governance as independent problems that require separate solutions.
They task Dev and AI/ML teams with driving velocity, task FinOps with controlling AI costs, and task Security with building governance frameworks, all without a shared, unified approach: three workstreams and three organizational silos.
This introduces fragmentation. The structure designed to solve the problem makes it worse. This doesn’t mean these teams shouldn’t be the primary owners of their workstreams; it’s just that leaders shouldn’t approach these challenges in silos. Think about the relationships this way:
- Governance: Without it, speed creates risk. Every agent deployed without proper controls expands the attack surface, so moving fast just accumulates vulnerabilities faster, and in the long term that slows you down. Done properly, governance equates to speed.
- Cost visibility: Without it, speed burns margin. Every deployment without unit economics is just a bet that the math will work out later. Moving fast means hemorrhaging money faster, and hemorrhaging money ultimately leaves less budget for innovation.
- Speed: Without it, governance becomes stagnant. Manual review cycles and approval processes that worked for traditional IT can’t scale to agentic workloads. Governance that slows deployment to a crawl isn’t governance; it’s a slow path to irrelevance.
The organizations that master all three simultaneously will reap the benefits, while those that try to solve them separately will see the gaps widen.
What winning looks like
The winners in the agentic era share a common pattern: they’ve built unified infrastructure that addresses speed, cost, and governance as a single integrated platform. This allows them to:
- Deploy with confidence. Teams ship agents knowing that guardrails are automated, not manual. Security and compliance happen at the infrastructure layer, not through review meetings that add weeks to timelines.
- Invest with clarity. Finance trusts forecasts because they’re based on consumption data. Product teams can model unit economics before launch. Cost attribution connects spending to business outcomes.
- Monetize what they build. Usage-based pricing is possible because consumption is metered at every layer, and AI capabilities generate revenue streams.
- See the full picture. Visibility spans the entire AI data path, not just LLM calls, but the APIs, events, MCP connections, and agent-to-agent communications that make up real-world agentic architectures.
- Move faster over time. Each deployment builds on the last, institutional knowledge accumulates, and the platform gets smarter.
This is the flywheel in action: Governance enables speed, speed enables cost efficiency, cost efficiency funds further investment in governance and velocity. The three capabilities compound when unified and collapse when fragmented.

AI connectivity: The unified platform approach
The solution to these compounding challenges isn’t another point tool. It’s a new architectural approach for how AI systems, APIs, and agents connect and run in production: AI connectivity.
AI connectivity is the unified governance and runtime layer that spans the full data path agents traverse, from APIs and events to LLM calls, MCP connections, and agent-to-agent communication.
Traditional API management handles request-response traffic between applications. AI gateways handle traffic between agents and models. Alone, neither addresses the full scope of what agentic AI requires.
Agents don’t just call LLMs. They traverse the entire digital ecosystem, from invoking MCP tools to consuming APIs and event streams as context, coordinating with other agents, and accessing data sources across the enterprise. Each connection point requires visibility, control, and governance that all work together.
AI connectivity closes this gap by providing:
- Unified traffic management across every protocol, context, and intelligence layer in the agentic stack: REST, GraphQL, gRPC, Kafka, WebSocket, MCP, LLMs, A2A, and more.
- Consistent policy enforcement that applies security, compliance, and cost controls regardless of whether the traffic is a traditional API call or an agent reasoning through a multi-step workflow.
- Full data path observability that shows not just what agents are doing, but what they’re connecting to, what data is flowing where, and what it costs.
- Built-in monetization infrastructure that meters consumption at every layer, enabling usage-based pricing, cost attribution, and unit economics visibility.
- Developer self-service that lets teams build and deploy without waiting for manual reviews.
When governance, cost visibility, and deployment velocity share a common platform, they reinforce one another rather than compete. Teams move fast with automatic guardrails, costs stay visible with metering built into the runtime, and security scales from policies enforced at the infrastructure layer.
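The idea of one policy layer applied uniformly can be sketched in a few lines of Python. This is a toy model, not Kong's actual API or any real gateway's implementation; the class, method, and policy names are invented for illustration. The point is that rate limiting, metering, and audit logging live in one place, regardless of whether the traffic is a REST call or an LLM request.

```python
# Toy sketch of a unified policy layer. All names are hypothetical and
# do not reflect any real gateway's API.
import time


class PolicyLayer:
    """Applies the same guardrails to every traffic type before dispatch."""

    def __init__(self, max_requests_per_minute):
        self.max_rpm = max_requests_per_minute
        self.window_start = time.monotonic()
        self.count = 0
        self.audit_log = []  # one shared record for cost and security review

    def enforce(self, traffic_type, team, handler, *args):
        # Rate limiting applies whether this is a REST call, a Kafka
        # event, an MCP tool invocation, or an LLM request.
        now = time.monotonic()
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0
        if self.count >= self.max_rpm:
            raise RuntimeError("rate limit exceeded")
        self.count += 1

        result = handler(*args)

        # Metering and attribution: every call is logged with the team
        # that owns it, so FinOps and Security read the same data.
        self.audit_log.append({"type": traffic_type, "team": team})
        return result


layer = PolicyLayer(max_requests_per_minute=100)
layer.enforce("rest", "search", lambda q: f"results for {q}", "kong")
layer.enforce("llm", "support", lambda p: f"completion for {p}", "hello")
print(len(layer.audit_log))  # both traffic types passed the same policy
```

Because enforcement, metering, and logging share one code path, adding a new traffic type means adding a handler, not a new silo of controls.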
Kong: The foundation for an AI connectivity strategy
Here’s what we’ve built at Kong.
Kong provides the AI connectivity layer that spans the whole data path across APIs, events, and AI-native traffic, with the governance, observability, and monetization infrastructure that sustainable AI programs require.
Organizations using Kong can see and control the entire AI data path from a single platform. They can enforce consistent policies across all traffic types, meter usage for cost attribution and revenue capture, and give developers self-service access to the infrastructure they need to build and deploy agents at scale.
This is AI connectivity in practice: a unified platform that makes the speed-cost-governance flywheel actually work.
The window is starting to close
The organizations that will dominate the agentic era are building their platform foundations today. They’re not waiting for the perfect solution; instead, they’re establishing the infrastructure to support increasingly sophisticated AI workloads.
Most enterprises are still struggling with fragmented tools and siloed approaches, and the market leadership opportunities remain wide open for those who move decisively.
But the window is closing. With each passing quarter, a few more organizations adopt the unified platform approach. Once leaders separate from the pack, catching up becomes exponentially harder.
The question isn’t whether AI connectivity matters. It’s whether you’re building on it or falling behind those who are.