Agentic LLM browsers are turning everyday browsing into automated, AI-driven workflows but they also expose a powerful new attack surface for prompt injection and data theft.
By letting an AI “drive” the browser with your full session, cookies, and permissions, old bugs like XSS now escalate into full agent hijack and cross-site compromise.
Since the first LLM-powered browsers appeared in mid‑2025, the browser has shifted from a passive renderer to an active agent that reads pages, clicks links, fills forms, and even sends emails on behalf of the user.
Products like Perplexity Comet, OpenAI Atlas, Edge Copilot, and Brave Leo embed LLMs directly into the browsing experience, turning natural language prompts into multi-step workflows.
Varonis Threat Labs analyzed leading agentic browsers to understand their inner workings, architectural differences, and potential attack surfaces.
This autonomy boosts productivity, but it also means any mistake or compromise in the agent’s logic can have an immediate real-world impact, from unauthorized navigation to silent data exfiltration.
How agentic browsers are built
Agentic browsers all bridge local, sandboxed web content with remote LLM backends, but their architectures differ.
Comet uses deeply integrated Chromium extensions with powerful permissions (including DevTools debugger) and a chrome. runtime.sendMessage bridge that allowed domains like perplexity.ai can call to drive tools such as navigation and content capture.
OpenAI’s Atlas decouples a native Swift client (OWL Client) from a separate Chromium-based OWL Host, exposing a Mojo IPC interface that trusted OpenAI origins can use to send structured commands into the browser engine.

Edge Copilot embeds a copilot.microsoft.com iframe inside a privileged internal WebUI page, communicating via window.parent.postMessage and guarded by an allow-list, while Brave Leo loads its UI from local resources and focuses on summarization, reducing some remote XSS exposure but still coupling AI to live page content.
In this model, a “Trusted Origin” (for example perplexity.ai, openai.com, or copilot.microsoft.com) becomes a high-privilege control plane for the AI agent.
If an attacker gains code execution on a trusted domain through XSS, subdomain takeover, DNS spoofing, or backend RCE, they can bypass the LLM’s reasoning layer and talk directly to privileged browser APIs.

In Comet, this means abusing externally_connectable plus chrome.runtime.sendMessage to invoke powerful tools and even read local files or internal network resources; in Atlas, it means driving the Mojo IPC layer beneath AI guardrails; in Edge Copilot, it means calling hidden “shadow tools” via crafted postMessage payloads.
Because the agent runs with the user’s cookies and permissions, these attacks can break Same-Origin Policy boundaries, enabling cross-tab data theft, forced navigation, silent downloads, or impersonation actions such as sending email and initiating transactions.
Prompt injection and data void abuse
Indirect prompt injection where malicious instructions are hidden in page content, metadata, or even titles remains a central risk in LLM browsers to navigate to google.com by using the copilot.microsoft.com execution context as follows.

A single XSS or prompt injection no longer just steals a cookie; it can weaponize an always-on, high-privilege automation layer that clicks, types, downloads, and sends data across tabs with legitimate user credentials.
Research shows that when agentic browsers summarize or analyze a page, they often feed large chunks of untrusted HTML directly into the model, enabling hidden prompts to instruct the agent to open sensitive sites, exfiltrate data, or call tools the UI never exposes.
As part of STARTAGENT, the Perplexity backend first sends a JWT to the extension to enable subsequent communication over a WebSocket.

Data-void attacks raise the stakes further: if an attacker controls the only content on an obscure topic, the LLM may treat that malicious page as ground truth, voluntarily following its instructions and, for example, loading weaponized sites via “GetContent” that trigger drive-by downloads or additional script execution.
Beyond content injection, exposing or inferring system prompts allows attackers to tailor payloads to each browser’s internal rules, selectively evading filters and driving more reliable exploitation.
Yet assessments from multiple security teams show that current guardrails are inconsistent and often lag behind new attack patterns, leaving AI browsers as a prime target for real-time fuzzing and offensive research.
The core paradox of the AI browser is that to be useful, the agent must cross the very isolation boundaries that traditional browser security worked for decades to harden.
Many of the impacts sensitive document access, anomalous file reads, or unusual outbound connections will surface in backend systems rather than in the browser itself, making data-aware detection, strict origin and tool scoping, and continuous security testing essential as agentic browsing moves into mainstream use.
Follow us on Google News, LinkedIn, and X to Get Instant Updates and Set GBH as a Preferred Source in Google.





