Cursor Release Notes
Last updated: Mar 6, 2026
- Mar 5, 2026
- Date parsed from source: Mar 5, 2026
- First seen by Releasebot: Mar 6, 2026
Build agents that run automatically
Cursor unveils Automations to run always-on agents triggered by Slack, Linear, GitHub, PagerDuty, or custom webhooks. Agents spin up cloud sandboxes, learn over time, and cover security reviews, incident response, weekly digests, and more, turning code workflows into a software factory.
We're introducing Cursor Automations to build always-on agents.
These agents run on schedules or are triggered by events like a Slack message, a newly created Linear issue, a merged GitHub PR, or a PagerDuty incident. In addition to these built-in integrations, you can configure your own custom events with webhooks.
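For anything the built-in integrations don't cover, a custom webhook can fire the automation from your own service. The sketch below is a hedged illustration only: the endpoint URL and the `{"event": ..., "payload": ...}` body shape are hypothetical stand-ins, not Cursor's documented webhook schema.

```python
import json
import urllib.request

# Hypothetical endpoint; the real URL and payload schema come from your
# automation's webhook settings.
WEBHOOK_URL = "https://example.com/hooks/my-automation"

def build_event(event_type: str, payload: dict) -> bytes:
    """Serialize a custom event into a JSON body (assumed shape, for illustration)."""
    return json.dumps({"event": event_type, "payload": payload}).encode("utf-8")

def fire_automation(event_type: str, payload: dict) -> int:
    """POST a custom event to the automation's webhook; returns the HTTP status."""
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_event(event_type, payload),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (left commented out so the sketch stays side-effect free):
# fire_automation("export.finished", {"rows": 120_000, "duration_s": 342})
```

Any service that can make an HTTP POST — a CI job, a cron script, another bot — can act as a trigger this way.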
“I love that automations work for both quick wins and more complex workflows. I can schedule the obvious stuff in seconds, but I still have full flexibility to catch any webhook or plug into custom MCPs when I need to.”
Trent Haines
Software Engineer, Decagon
Upgrading the software engineering pipeline
With the rise of coding agents, every engineer is able to produce much more code. But code review, monitoring, and maintenance haven’t sped up to the same extent yet. At Cursor, we’ve been using automations to help scale these other parts of the development lifecycle.
When invoked, the automated agent spins up a cloud sandbox, follows your instructions using the MCPs and models you've configured, and verifies its own output. Agents also have access to a memory tool that lets them learn from past runs and improve with repetition.
As we’ve run more automated agents on our own codebase at Cursor over the past several weeks, two categories of automations have emerged.
Review and monitoring
Automations are great for reviewing changes. They can catch and fix everything from style nits and inconsistencies to security vulnerabilities and performance regressions.
In fact, Bugbot is in many ways the original automation! It runs when a PR is opened or updated, gets triggered thousands of times a day, and has caught millions of bugs since we first launched it. Automations allow you to customize all kinds of review agents for different purposes. Here are three we use at Cursor:
Security review
Our security review automation is triggered on every push to main. This way, the agent can work for longer to find more nuanced issues without blocking the PR. It audits the diff for security vulnerabilities, skips issues already discussed in the PR, and posts high-risk findings to Slack. This automation has caught multiple vulnerabilities and critical bugs at Cursor.
Agentic codeowners
On every PR open or push, this automation classifies risk based on blast radius, complexity, and infrastructure impact. Low-risk PRs get auto-approved. Higher-risk PRs get up to two reviewers assigned based on contribution history. Decisions are summarized in Slack and logged to a Notion database via MCP so we can audit the agent's work and tweak the instructions.
Incident response
When triggered by a PagerDuty incident, this automation kicks off an agent that uses the Datadog MCP to investigate the logs and looks at the codebase for recent changes. It then sends a message in a Slack channel to our on-call engineers, with the corresponding monitor message and a PR containing the proposed fix. This has significantly reduced our incident response time.
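The incident-response flow above is, at its core, plain orchestration: fetch logs, look at recent changes, attempt a fix, notify the on-call channel. The sketch below mirrors that flow under loud assumptions — the `fetch_logs`, `recent_changes`, `open_fix_pr`, and `post_to_slack` callables are hypothetical stand-ins for the Datadog MCP, repo history, the agent's PR step, and the Slack integration, not a real SDK.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Finding:
    monitor_message: str
    suspect_commits: list[str]
    proposed_fix_pr: str | None

def handle_incident(incident: dict, fetch_logs, recent_changes,
                    open_fix_pr, post_to_slack) -> Finding:
    """Mirror the automation's flow: logs -> recent changes -> fix PR -> Slack."""
    logs = fetch_logs(incident["monitor_id"])          # stand-in: Datadog MCP
    commits = recent_changes(since_hours=24)           # stand-in: repo history
    pr = open_fix_pr(logs, commits)                    # may be None if no fix found
    finding = Finding(incident["monitor_message"], commits, pr)
    post_to_slack(                                     # stand-in: Slack integration
        channel="#on-call",
        text=f"{finding.monitor_message}\nSuspect commits: {', '.join(commits)}"
             + (f"\nProposed fix: {pr}" if pr else ""),
    )
    return finding
```

The point of the shape is that each step is swappable: the automation's instructions pick the MCPs, and the agent fills in the judgment between steps.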
Chores
We’ve also found automations useful for everyday tasks and knowledge work that require stitching together information across different tools.
Weekly summary of changes
This automation posts a weekly Slack digest summarizing meaningful changes to the repository in the last seven days. The agent highlights major merged PRs, bug fixes, technical debt, and security or dependency updates.
Test coverage
Every morning, an automated agent reviews recently merged code and identifies areas that need test coverage. It follows existing conventions when adding tests and only alters production behavior when necessary. The agent then runs relevant test targets before opening a PR.
Bug report triage
When a bug report lands in a Slack channel, this automation checks for duplicates and creates an issue using Linear MCP. The agent then investigates the root cause in the codebase, attempts a fix, and replies in the original thread with a summary.
How Rippling uses automations
Teams outside Cursor have already started building automations. Abhishek Singh at Rippling set up a personal assistant. He dumps meeting notes, action items, TODOs, and Loom links into a Slack channel throughout the day. A cron agent runs every two hours, reads everything alongside his GitHub PRs, Jira issues, and Slack mentions, deduplicates across sources, and posts a clean dashboard.
He also runs Slack-triggered automations for creating Jira issues from threads and summarizing discussions in Confluence. Singh and Rippling have extended their use of automations to handle tasks like incident triage, weekly status reports, on-call handoff, and more. The most useful automations get shared across the team.
“Automations have made the repetitive aspects of my work easy to offload. By making automations to round up tasks, deal with doc updates, and respond to Slack messages, I can focus on the things that matter. Anything can be an automation!”
Tim Fall
Senior Staff Software Engineer, Rippling
The factory that creates your software
All of these automations are powered by cloud agents that use their own computers to build, test, and demo their work. Now you can build the factory that creates your software by configuring agents to continuously monitor and improve your codebase.
“We built our software factory using Cursor Automations with Runlayer MCP and plugins. We move faster than teams five times our size because our agents have the right tools, the right context, and the right guardrails.”
Tal Peretz
Co-founder, Runlayer
Try creating an automation at cursor.com/automations, or start from a template. Learn more in the docs.
- Mar 5, 2026
- Date parsed from source: Mar 5, 2026
- First seen by Releasebot: Mar 6, 2026
Automations
Cursor unveils automations for always-on agents with triggers and schedules across Slack, GitHub, and more.
Cursor automations
Cursor now supports automations for building always-on agents that run based on triggers and instructions you define.
Automations run on schedules or are triggered by events from Slack, Linear, GitHub, PagerDuty, and webhooks.
When invoked, the agent spins up a cloud sandbox and follows your instructions using the MCPs and models you've configured. Agents also have access to a memory tool that lets them learn from past runs and improve with repetition.
Create automations at cursor.com/automations, or start from a template. Read more in our announcement.
- Mar 4, 2026
- Date parsed from source: Mar 4, 2026
- First seen by Releasebot: Mar 5, 2026
Cursor in JetBrains IDEs
Cursor ACP lands in JetBrains IDEs, enabling agent-driven development with frontier models from OpenAI, Anthropic, and Google.
Cursor ACP in JetBrains IDEs
Cursor is now available in IntelliJ IDEA, PyCharm, WebStorm, and other JetBrains IDEs through the Agent Client Protocol (ACP).
With Cursor ACP, developers who rely on JetBrains for Java and multilanguage support can use any frontier model from OpenAI, Anthropic, Google, and Cursor for agent-driven development.
Install the Cursor ACP directly in your JetBrains IDE from the ACP Registry, and authenticate with your existing Cursor account.
Read more in our announcement.
- Mar 4, 2026
- Date parsed from source: Mar 4, 2026
- First seen by Releasebot: Mar 4, 2026
Cursor is now available in JetBrains IDEs
Cursor ACP lands in JetBrains IDEs, enabling agent-driven coding with frontier models from OpenAI, Anthropic, Google, and Cursor. It includes secure codebase indexing, semantic search, and model-specific optimization for peak performance inside IntelliJ, PyCharm, and WebStorm. The ACP is free on paid plans.
Coding with Cursor in JetBrains IDEs
Using Cursor ACP in JetBrains IDEs offers many of the benefits that make our agents effective across all surfaces.
Different models are better suited for different kinds of tasks. With Cursor ACP, developers can explore and choose frontier models from OpenAI, Anthropic, Google, and Cursor. Our agent harness is also custom-built for every model to optimize output quality and performance.
Cursor also uses secure codebase indexing and semantic search to understand large enterprise codebases. ACP combines these capabilities with deep code intelligence and tooling in JetBrains IDEs.
"JetBrains has always seen its mission as bringing the best of the industry to our users. I'm very excited about Cursor becoming a special guest in the family of ACP-compliant agents in JetBrains IDEs. In this setup, developers stay in control of their environment, while Cursor brings the powerful AI assistance that has earned it such popularity. This collaboration looks like a win for Cursor, for JetBrains, and most importantly for developers."
Aleksey Stukalov
Head of IDEs Division, JetBrains
Getting started
Install the Cursor ACP directly in the JetBrains AI chat, and authenticate with your existing Cursor account. The Cursor ACP is free for all users on paid plans. Learn more in the docs.
What's next
The Cursor ACP is a foundation for deeper integrations with JetBrains. We're excited to bring agentic coding capabilities to more developers.
- Mar 3, 2026
- Date parsed from source: Mar 3, 2026
- First seen by Releasebot: Mar 4, 2026
2.6
This release adds interactive UIs in agent chats, lets teams share private plugins, and tightens core features like Debug mode. MCP Apps bring charts, diagrams, and whiteboards inside Cursor, and admins can run team marketplaces for centralized plugin governance.
Release Highlights
This release introduces interactive UIs in agent chats, a way for teams to share private plugins, and improvements to core capabilities like Debug mode.
MCP Apps
MCP Apps support interactive user interfaces like charts from Amplitude, diagrams from Figma, and whiteboards from tldraw directly inside Cursor.
Team marketplaces for plugins
On Teams and Enterprise plans, Admins can now create team marketplaces to share private plugins internally. Go to the settings page to manage and distribute plugins with central governance and access controls.
Desktop Improvements (15)
Desktop Bug Fixes (10)
Web Improvements & Fixes (2)
- Feb 26, 2026
- Date parsed from source: Feb 26, 2026
- First seen by Releasebot: Feb 26, 2026
Closing the code review loop with Bugbot Autofix
Bugbot Autofix is now generally available, letting cloud agents automatically test and fix issues in PRs to speed up code review. Accuracy has improved, with more bugs caught and a higher merge rate, and users can enable Autofix from the Bugbot dashboard.
Bugbot Autofix updates
Agents are now tackling more ambitious tasks, generating thousands of lines of code, and controlling their own computers to demo their work. Today, we're extending these capabilities to Bugbot, our code review agent.
Bugbot can now find and automatically fix issues in PRs. Bugbot Autofix spawns cloud agents that work independently in their own virtual machines to test your software. Over 35% of Bugbot Autofix changes are merged into the base PR.
Autofix is now out of beta and available to all Bugbot users. Once enabled, the PRs Bugbot reviews will include proposed fixes to give you a jumpstart on code review.
Resolving more bugs per PR
We’ve continued to invest in Bugbot’s effectiveness at identifying issues while optimizing for bugs that get fixed.
The average number of issues identified per run has nearly doubled in the last six months, while the resolution rate (i.e., percentage of bugs resolved by users before the PR is merged) has increased from 52% to 76%. This means Bugbot is catching more bugs and flagging fewer false positives.
What's next
Bugbot Autofix is an early example of agents running automatically based on an event like PR creation. Next, we are working on giving teams the ability to configure custom automations for workflows beyond code review.
We're also focused on enabling Bugbot to verify its own findings, conduct deep research on complex issues, and continuously scan your codebase to catch and resolve bugs.
Get started by enabling Bugbot Autofix in your Bugbot dashboard. Or learn more in our docs.
- Feb 26, 2026
- Date parsed from source: Feb 26, 2026
- First seen by Releasebot: Feb 26, 2026
The third era of AI software development
Cursor describes a third era of AI software development, driven by autonomous cloud agents that tackle large tasks and turn developers into managers of a software factory. Cloud agents enable parallel work reviewed through logs and previews. The post accompanies the launch of Cursor cloud agents.
When we started building Cursor a few years ago, most code was written one keystroke at a time. Tab autocomplete changed that and opened the first era of AI-assisted coding. Then agents arrived, and developers shifted to directing agents through synchronous prompt-and-response loops. That was the second era.

Now a third era is arriving. It is defined by agents that can tackle larger tasks independently, over longer timescales, with less human direction. As a result, Cursor is no longer primarily about writing code. It is about helping developers build the factory that creates their software. This factory is made up of fleets of agents that they interact with as teammates: providing initial direction, equipping them with the tools to work independently, and reviewing their work.

Many of us at Cursor are already working this way. More than one-third of the PRs we merge are now created by agents that run on their own computers in the cloud. A year from now, we think the vast majority of development work will be done by these kinds of agents.
From Tab to agents
Tab excelled at identifying where low-entropy, repetitive work could be automated. For nearly two years, it produced significant leverage. Then the models improved: agents could hold more context, use more tools, and execute longer sequences of actions.

Developer habits began to shift, slowly through the summer, then rapidly over the last few months with the releases of Opus 4.6, Codex 5.3, and Composer 1.5. The transformation has been so complete that today, most Cursor users never touch the tab key. In March 2025, we had roughly 2.5x as many Tab users as agent users. That is now flipped: we have 2x as many agent users as Tab users.
Cloud agents and artifacts
Compared to Tab, synchronous agents work further up the stack. They handle tasks that require context and judgment, but still keep the developer in the loop at every step. But this form of real-time interaction, combined with the fact that synchronous agents compete for resources on the local machine, means it is only practical to work with a few at a time.

Cloud agents remove both constraints. Each runs on its own virtual machine, allowing a developer to hand off a task and move on to something else. The agent works through it over hours, iterating and testing until it is confident in the output, and returns with something quickly reviewable: logs, video recordings, and live previews rather than diffs.

This makes running agents in parallel practical, because artifacts and previews give you enough context to evaluate output without reconstructing each session from scratch. The human role shifts from guiding each line of code to defining the problem and setting review criteria.
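Because each agent gets its own VM, running several at once is just fan-out at the orchestration layer. A minimal sketch of that idea, where `launch_cloud_agent` and the `run` callable are hypothetical stand-ins rather than Cursor's actual API:

```python
from concurrent.futures import ThreadPoolExecutor

def launch_cloud_agent(task: str, run) -> dict:
    """Hypothetical stand-in: hand a task to a cloud agent, wait for artifacts."""
    return {"task": task, "artifacts": run(task)}

def fan_out(tasks: list[str], run) -> list[dict]:
    """Dispatch many agents in parallel; each returns reviewable artifacts."""
    with ThreadPoolExecutor(max_workers=max(1, len(tasks))) as pool:
        futures = [pool.submit(launch_cloud_agent, t, run) for t in tasks]
        return [f.result() for f in futures]  # results in task order

# Each result is meant to carry enough context (logs, recordings, previews)
# that you review the artifact, not the session.
```

The design point is that the human's loop is over the returned artifacts, not over each agent's individual steps.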
The shift is underway inside Cursor
Thirty-five percent of the PRs we merge internally at Cursor are now created by agents operating autonomously in cloud VMs. The developers adopting this new way of working share three traits:
- Agents write almost 100% of their code.
- They spend their time breaking down problems, reviewing artifacts, and giving feedback.
- They spin up multiple agents simultaneously instead of handholding one to completion.
There is a lot of work left before this approach becomes standard in software development. At industrial scale, a flaky test or broken environment that a single developer can work around turns into a failure that interrupts every agent run. More broadly, we still need to make sure agents can operate as effectively as possible, with full access to the tools and context they need. We think yesterday's launch of Cursor cloud agents is an initial but important step in that direction.
- Feb 26, 2026
- Date parsed from source: Feb 26, 2026
- First seen by Releasebot: Feb 26, 2026
Bugbot Autofix
Bugbot can automatically fix issues it finds in pull requests.
Autofix runs cloud agents on their own machines to test changes and propose fixes directly on your PR. Today, over 35% of Bugbot Autofix changes are merged into the base PR.
Bugbot will post a comment on the original PR with a preview of the autofix changes, which you can merge using the provided @cursor command. If you'd like, you can instead configure autofix to push changes directly to your branch with no interaction required.
To enable autofix, head over to your Bugbot dashboard.
Read more in our announcement.
- Feb 24, 2026
- Date parsed from source: Feb 24, 2026
- First seen by Releasebot: Feb 25, 2026
Cursor agents can now control their own computers
Cursor unveils a new version of cloud agents accessible from web, mobile, Slack, and GitHub. Agents run in isolated VMs, auto-create PRs with artifacts, and let you edit remotely, signaling a major shift toward autonomous end-to-end code delivery.
The next level of autonomy
Over the last few months, we've been giving agents their own virtual machines with full development environments, along with the ability to test their changes and produce artifacts (videos, screenshots, and logs) so you can quickly validate their work.
Today we're making a new version of Cursor cloud agents available from anywhere you work, including the web, mobile, desktop app, Slack, and GitHub. Cloud agents onboard themselves onto your codebase and produce merge-ready PRs with artifacts to demo their changes. You can also control the agent's remote desktop to use the modified software and make edits yourself, without checking out the branch locally.
This has been the biggest shift in how we build software since the move from Tab autocomplete to working synchronously with agents. More than 30% of the PRs we merge at Cursor are now created by agents operating autonomously in cloud sandboxes.
Local agents make it easy to start generating code, but they quickly run into conflicts and compete with each other (and with you) for your computer's resources. Cloud agents remove this constraint by giving each agent an isolated VM, so you can run many in parallel.
Cloud agents can also build and interact with software directly in their own sandbox, allowing them to iterate until they've validated their output rather than handing off the first attempt. The video below shows a proof-of-concept from our earlier research on enhanced computer use.
You can see the agent navigate web pages in the browser, manipulate tools like spreadsheets, interpret data and make decisions, and resolve issues in complex UI environments.
Using cloud agents at Cursor
For the last month, we’ve been using cloud agents internally, and it has changed how we build software. Instead of breaking tasks into small chunks and micro-managing agents, we delegate more ambitious tasks and let them run on their own.
These are a few ways we’re using cloud agents:
Building new features
We used cloud agents to help us build plugins, which we recently launched on the Cursor Marketplace. Here is one of our prompts:
For each component displayed in a given plugin's page, we'd like to include a link to the source code. For skills, commands, rules, and subagents - that's the .md file. For hooks, it's the hooks.json. For mcps, it's the .mcp.json or the manifest where it's defined. As we index all the components of a plugin, keep track of the source file and construct links to that file by way of the underlying github url. Surface this to the frontend and have our frontend link out to github using this icon. Test w/ https://github.com/prisma/cursor-plugin locally
The agent implemented the feature, then recorded itself navigating to the imported Prisma plugin and clicking each component to verify the GitHub links.
For local testing, the agent temporarily bypassed the feature flag gating the marketplace page, then reverted before pushing. It rebased onto main, resolved merge conflicts, and squashed to a single commit.
Reproducing vulnerabilities
We kicked off a cloud agent from Slack with the prompt, "Please triage and explain this vulnerability to me in great detail," followed by a description of a clipboard exfiltration vulnerability. When the agent finished running, it responded in the Slack thread with a summary of what it accomplished.
The agent built an HTML page that exploits the vulnerability via an exposed API. It started a backend server to host the demo page locally and loaded the page in Cursor’s in-app browser.
The video artifact shows the complete attack flow: the agent copied a test UUID to the system clipboard, loaded the demo page in Cursor's browser, and clicked a button to exfiltrate and display the UUID. It also took a screenshot showing the successful clipboard theft and committed the demo HTML file to the repo.
Handling quick fixes
We asked a cloud agent to replace the static "Read lints" label with a dynamic one driven by lint results. It implemented "No linter errors" for zero diagnostics and "Found N errors" for N diagnostics, with styling to match existing CSS.
The agent tested two cases in the Cursor desktop app: a file with multiple type errors and a clean file with no errors. The video artifact shows the agent verifying that the clean file has an expanded group that shows “No linter errors.”
Testing UI
We spun up a cloud agent to check that everything works correctly at cursor.com/docs. It spent 45 minutes doing a full walkthrough of our docs site. The agent provided a summary of all the features it tested, including the sidebar, top navigation, search, copy page button, share feedback dialog, table of contents, and theme switching.
Now that agents can handle most of the implementation, we’ve found that the role of a developer is more about setting direction and deciding what ships.
What's next
We’re building toward a future of self-driving codebases, where agents merge PRs, manage rollouts, and monitor production. We will go from a world where developers use agents to create diffs to one where agents ship tested features end-to-end.
To fully realize that shift will require improving tooling, models, and the interaction patterns. Our near-term focus is on coordinating work across many agents and building models that learn from past runs and become more effective as they accumulate experience.
Get started at cursor.com/onboard to watch the agent configure itself and record a demo. Or learn more in the docs.
- Feb 24, 2026
- Date parsed from source: Feb 24, 2026
- First seen by Releasebot: Feb 25, 2026
Cloud Agents with Computer Use
Cloud agents
Cloud agents can now use the software they create to test changes and demo their work.
After onboarding onto your codebase, each agent runs in its own isolated VM with a full development environment. Cloud agents produce merge-ready PRs with artifacts (videos, screenshots, and logs) that make it possible to quickly review their changes.
Cloud agents are available anywhere you use Cursor, including web, desktop, mobile, Slack, and GitHub.
Get started at cursor.com/onboard to watch the agent configure itself and record a demo. Or read more in our announcement.