See our final video above, and check out our project slides as well!
Atlassian believes that teamwork begins with trust. With Verity, you can now verify that trust.
Inspiration
As Atlassian introduces more AI agents into tools like Jira and Confluence, we noticed a growing problem: teams can’t always see what those agents are doing or prove that their outputs can be trusted. Agents now summarize tickets, comment on issues, and automate workflows, but there’s no single place to review their actions, who owns them, or whether they’re behaving as expected.
We built Verity because we believe Atlassian’s products should hold AI to the same standard of visibility and accountability that they already bring to human work.
What it does
Verity gives teams a clear, verifiable record of what their AI agents do across Atlassian tools.
Each time an agent takes an action, such as writing a summary or updating a ticket, Verity records the action’s inputs, outputs, and timestamp. It creates a secure digital fingerprint (a cryptographic hash), stores the full record off-chain, and saves a proof of that record on a small blockchain contract for transparency.
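The recording step can be sketched as follows. This is a minimal illustration, not Verity's actual backend code; the function and field names (`make_record`, `agent_id`, and so on) are hypothetical, but it shows the core idea of canonical serialization followed by a SHA-256 fingerprint:

```python
import hashlib
import json
from datetime import datetime, timezone

def canonical_json(payload: dict) -> str:
    """Serialize with sorted keys and fixed separators so the same data
    always produces byte-identical JSON, and therefore the same hash."""
    return json.dumps(payload, sort_keys=True, separators=(",", ":"))

def make_record(agent_id: str, action: str, inputs: dict, outputs: dict) -> dict:
    """Bundle an agent action with a timestamp and its SHA-256 fingerprint.

    The full record would be stored off-chain; only the hex digest
    needs to be anchored on-chain as tamper-evident proof.
    """
    body = {
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "outputs": outputs,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    digest = hashlib.sha256(canonical_json(body).encode("utf-8")).hexdigest()
    return {"record": body, "hash": digest}
```

Because the serialization is deterministic, anyone holding the off-chain record can recompute the digest later and compare it to the on-chain value.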
Inside Jira, the Forge app displays a timeline showing every recorded action, who owns the agent, what model it is using, and whether the output can be verified.
It is a simple way to see and prove what your AI is doing.
How we built it
We combined three main parts:
- Smart Contract (Solidity / Hardhat): Stores each action’s hash, content ID, and timestamp, making it tamper-evident.
- Backend (FastAPI): Packages the agent data into canonical JSON, computes the hash, saves the record in an IPFS-style store, and connects to the contract through Web3.py.
- Jira Forge App: Displays agent timelines and verification results directly inside Jira. We used the `jira:issuePanel` and `globalPage` modules with external fetch permissions to connect to the backend.
Together, these components let a user record and verify an AI action in just a few seconds without leaving Jira.
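The verification half of that flow reduces to a hash comparison. The sketch below is an assumption about the shape of the check, with `verify_record` as a hypothetical name; in Verity the reference hash would be fetched from the smart contract via Web3.py rather than passed in directly:

```python
import hashlib
import json

def canonical_json(payload: dict) -> str:
    # Deterministic serialization: sorted keys, no extra whitespace.
    return json.dumps(payload, sort_keys=True, separators=(",", ":"))

def verify_record(record: dict, onchain_hash: str) -> bool:
    """Recompute the off-chain record's fingerprint and compare it with
    the hash anchored on-chain. Any tampering with the stored record
    changes the digest, so the check fails."""
    digest = hashlib.sha256(canonical_json(record).encode("utf-8")).hexdigest()
    return digest == onchain_hash
```

Since only a hex digest lives on-chain, this check is cheap to run on every panel load, which is what makes the in-Jira verification feel instant.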
Challenges we ran into
- Making blockchain calls work securely within Forge’s sandboxed environment.
- Ensuring that every JSON record produced a valid hash every time, since even small formatting differences caused mismatches.
- Keeping the system fast enough to feel instant despite blockchain transaction delays.
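The hash-mismatch problem above is easy to reproduce: two JSON serializations of the same data that differ only in formatting yield different digests. This toy example (not our actual code) shows why every producer had to serialize with the same canonical rules:

```python
import hashlib
import json

data = {"ticket": "JIRA-42", "summary": "Resolved"}

# Two serializations of identical data, differing only in formatting:
loose = json.dumps(data, indent=2)  # pretty-printed, with newlines
canonical = json.dumps(data, sort_keys=True, separators=(",", ":"))

h_loose = hashlib.sha256(loose.encode("utf-8")).hexdigest()
h_canonical = hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The digests differ even though the data is the same, so verification
# would fail unless both sides agree on one canonical form.
print(h_loose == h_canonical)  # False
```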
Accomplishments that we're proud of
We believe we solved the trust problem underlying every agentic workflow in a few key ways:
- Creating a Jira interface where users can verify actions without touching blockchain code.
- Achieving verification from action creation to on-chain proof in under five seconds.
- Proving that we can add trust and auditability to Atlassian's existing AI workflows without disrupting them or requiring significant restructuring.
What we learned
- How to design around Forge’s architecture and security model while still enabling external integrations.
- That a lack of trust and accountability in AI is one of the greatest barriers to its adoption, and that this project takes concrete steps toward solving it.
- How to connect emerging technologies like blockchain to everyday enterprise tools. This definitely lowered the barrier to entry for us for future blockchain projects!
What's next for Verity
- Deploy Verity on a public testnet such as Base Sepolia so verification can happen outside the local environment.
- Add agent reputation scoring in Jira based on reliability and compliance.
- Integrate with Atlassian Intelligence and Forge Triggers to automatically log AI activity.
- Expand to Confluence so AI-generated documentation can have verifiable edit trails.
