hoop.dev’s cover photo
hoop.dev

hoop.dev

Data Security Software Products

Boston, MA 1,406 followers

The Gateway for AI Acceleration: Layer 7 Control for Humans and Agents

About us

hoop.dev helps customers safely accelerate AI adoption and engineering velocity. The hoop.dev gateway enforces inline data masking, guardrails, and approvals so every interaction is secure, audited, and fast.

Website
https://hoop.dev/
Industry
Data Security Software Products
Company size
11-50 employees
Headquarters
Boston, MA
Type
Privately Held
Specialties
Zero-Configuration Data Masking, Granular Access Controls, Automated Reviews, Infrastructure Access, and AI Agent Security


Updates

  • Approval queues are the silent productivity tax of access governance. Five minutes per request, across a team, across a week. That's the tax. The AI Session Analyzer changes the math. Every flagged command arrives with the LLM's reasoning already attached: what it does, why it might be risky, what scope it touches. Reviewers stop investigating. They start deciding.
    → Faster reviews. Context comes pre-loaded.
    → Better consistency. Same command, same reasoning, every time.
    → Audit trails that explain themselves. Six months later, you still know why.
    Open source under the MIT license, and free for small teams. Read more: https://lnkd.in/dpC7YQWJ
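    As a rough illustration of the kind of context a reviewer might see attached to a flagged command, here is a minimal Python sketch. The field names are hypothetical, not hoop.dev's actual schema.

        from dataclasses import dataclass

        @dataclass
        class ReviewAnnotation:
            """Hypothetical shape of the reasoning attached to a flagged command."""
            command: str         # the command awaiting approval
            summary: str         # what the LLM says the command does
            risk_rationale: str  # why it might be risky
            scope: list          # tables, hosts, or resources it touches

        example = ReviewAnnotation(
            command="DELETE FROM orders WHERE created_at < '2020-01-01'",
            summary="Bulk-deletes order rows older than 2020.",
            risk_rationale="Irreversible mass deletion on a production table.",
            scope=["postgres/prod/orders"],
        )
        print(example.summary)  # reviewers read this instead of reverse-engineering the SQL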

  • 🥇 Congratulations to 360Learning for their exceptional performance in the LMS category of the eLearningIndustry people's choice awards. The companies that win on customer trust are also the ones absorbing the most operational complexity behind the scenes. Multi-region data, regulated buyers, AI features hitting production systems, customer-data residency. None of it is glamorous. All of it is what separates "ships fast" from "ships fast and gets to keep their customers." 360Learning lives that distinction, and we are proud to support them.

  • 🚨 Agents move too fast for reviewers, so we have AI judge AI. AI agents today work at machine speed. They make important changes without any oversight: they can write SQL, run shell commands, and change infrastructure faster than any human reviewer can follow. We just shipped the AI Session Analyzer. It runs every command in Hoop through your chosen LLM (OpenAI, Anthropic, Azure, or a custom option) before the command executes. Three risk levels. Three policy actions. The dangerous ones never reach production. Why this matters:
    → Static rules miss intent. Regex blocks DROP TABLE users but lets WITH t AS (SELECT * FROM users) DELETE FROM t WHERE 1=1 through. The Session Analyzer reads what the command is trying to do.
    → Approvals get smarter. Medium-risk commands route to your reviewers with the LLM's reasoning attached. Faster decisions, better context.
    → Every classification is auditable. Risk level, title, explanation, and action taken are persisted on every session.
    Open source. MIT. Free for small teams. Read the full breakdown: https://lnkd.in/dpC7YQWJ
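    A minimal sketch of the pattern described above, assuming a three-level risk-to-action mapping. The function names and the mapping are illustrative, not hoop.dev's actual API, and llm_classify stands in for whichever provider you configure.

        # Sketch: classify a command before execution, then map its risk to a policy action.
        RISK_ACTIONS = {
            "low": "allow",      # executes immediately
            "medium": "review",  # routed to reviewers with the LLM's reasoning
            "high": "block",     # never reaches production
        }

        def llm_classify(command):
            """Placeholder for the configured LLM (OpenAI, Anthropic, Azure, or custom)."""
            # A real call would return the model's structured assessment of the command.
            return {
                "risk": "medium",
                "title": "Bulk delete on a production table",
                "explanation": "Removes rows from a live table with no backup step.",
            }

        def gate(command):
            verdict = llm_classify(command)
            action = RISK_ACTIONS[verdict["risk"]]
            # Persist the classification with the session so the audit trail explains itself.
            return {"command": command, **verdict, "action": action}

        print(gate("DELETE FROM orders WHERE shipped = false"))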

  • OWASP GenAI Security Project published 16 data security risks for Generative AI systems. We mapped 11 of them to the hoop.dev gateway. The pattern is clear: most AI data risks trace back to what humans and non-human identities can access in production. The fix isn't restricting AI context. It's controlling what data leaves your infrastructure in the first place. Here's a 4-page technical guide breaking down:
    → 4 risks where hoop.dev is the direct solution
    → 4 where we're part of the solution
    → 3 where we contribute to the solution stack
    If your team is running #ClaudeCode, #Copilot, or any AI agent against production systems, this is the way to start thinking about data security. #OWASP #CISO #SRE #DataSecurity #AIGovernance #AIAgents

  • Most teams we talk to want to use Claude Code with the full context of live infrastructure, but none of them can answer what happens to their data once Claude Code is deployed. Every action Claude Code takes moves data through your infrastructure. Without protocol-level controls, you have no say in what data is exposed, what actions get logged, or what information reaches the model. We're hosting a live session to show you how to change that.
    Claude Code in Production: Changing the Risk of Data Access & Movement
    - Why current access controls weren't built for AI coding agents
    - How protocol-layer interception changes the risk profile in real time
    - Live demo: Claude Code querying production data and remediating an incident with hoop.dev quietly patrolling in the background
    The agent gets what it needs. The data that should never move, doesn't.
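    To make the idea of intercepting data before it reaches the model concrete, here is a toy Python sketch of masking sensitive values in a query result. The patterns and field names are invented for illustration; in hoop.dev the masking is configured at the gateway rather than written in application code.

        import re

        # Toy rules: redact values that look like email addresses or card numbers
        # before a result set is handed back to the coding agent.
        PATTERNS = [
            re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),    # email addresses
            re.compile(r"\b(?:\d[ -]?){13,16}\b"),     # card-number-like digit runs
        ]

        def mask(value):
            for pattern in PATTERNS:
                value = pattern.sub("***MASKED***", value)
            return value

        row = {"customer": "Ada Lovelace", "email": "ada@example.com", "card": "4111 1111 1111 1111"}
        safe_row = {key: mask(value) for key, value in row.items()}
        print(safe_row)  # the name passes through; the email and card number are redacted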

  • Your session recording tool saw exactly what the AI agent did, and it couldn't stop any of it. This is the visibility vs. control problem, and it's now the most important gap in AI security. Ephemeral credentials solve standing access. Session logs give you a transcript. But neither of them answers the question regulators are now asking: "Show me that you can stop it, not just that you logged it when it did." Forensic evidence is not preventive, detection does not guarantee protection, and an audit trail is not proof of control.
    AI agents don't wait in approval queues. They execute inside sessions your tools already authorized, with credentials your policies already issued. The perimeter is no longer identity-bound access; it's what happens during live access.
    hoop.dev is the only way to extend real control into AI sessions. Command-level approve/deny control. Dynamic data masking. Guardrails that automatically block dangerous actions. AI security that governs what executes. That's hoop.dev. Learn more in the docs, then get started today: https://lnkd.in/ehJXyBjg
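    A toy contrast between the two models this post describes, recording versus preventive control. The deny list below is invented for illustration and is far cruder than a real policy.

        # Visibility: the recorder sees the command but executes it anyway.
        def record_and_run(command, execute):
            log_entry = f"session log: {command}"  # forensic evidence, after the fact
            execute(command)
            return log_entry

        # Control: the guardrail evaluates the command and can refuse to run it.
        DENY_MARKERS = ("DROP TABLE", "TRUNCATE", "RM -RF")

        def guard_and_run(command, execute):
            if any(marker in command.upper() for marker in DENY_MARKERS):
                return f"blocked before execution: {command}"
            execute(command)
            return f"allowed: {command}"

        print(guard_and_run("DROP TABLE users;", execute=print))            # blocked, never runs
        print(guard_and_run("SELECT count(*) FROM users;", execute=print))  # runs, then reported as allowed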


Funding

hoop.dev: 3 total rounds

Last round: Pre-seed

See more info on Crunchbase