Crogl, Inc.

Computer and Network Security

Autonomous Knowledge Engine for Security Operations

About us

Crogl is the only autonomous knowledge engine for security operations. It investigates every alert and executes threat hunts, continuously learning your processes with speed, consistency, and depth. Our mission: enable every security analyst to be as effective as the entire team.

What can Crogl do for you?

- Autonomous alert investigations: Our AI system handles the triage and investigation of every alert, with no pre-written playbooks required.
- Threat hunting: Auto-hunt from simple instructions, or connect Crogl to your threat intelligence platform to execute hunts based on intel reports automatically.
- Re-analysis of old alerts: Got new data, or unsure of response quality? Crogl can re-analyze old alerts with depth and completeness.
- Integration with existing tools: Crogl integrates smoothly with your current security ecosystem, ensuring compatibility and ease of use.

Crogl serves:

- Sophisticated security teams that want deep, consistent response
- Managed security service providers that want transparency and repeatability
- Small security teams with limited time and too much to do

Get started: Crogl installs easily in your private cloud or on-prem. Contact sales@crogl.com or visit www.crogl.com.

Website
https://www.crogl.com
Industry
Computer and Network Security
Company size
11-50 employees
Type
Privately Held
Founded
2023
Specialties
Alert Triage, Investigations, Security Analytics, Detection Engineering, Incident Response, SOC, Automation, Playbook, SOAR, SIEM, Threat Hunt, Modeling, LLM, IT Security, Cloud, Data Engineering, Schema Discovery, Analytics, and Query

Updates

  • AI SOC agents are getting a lot of attention right now. They query your SIEM. They isolate endpoints. They rotate credentials. But here's a question most vendors aren't asking: where did the agent get the credentials to do that? Every useful agent needs access. Access requires secrets. At that point, you don't just have an AI assistant. You have a secrets-handling system, whether you designed it that way or not. That's not theoretical. It maps directly to known OWASP risks: sensitive data exposure, prompt injection, excessive agency, insecure tooling. MCP-style architectures help. But they shift the problem, they don't eliminate it. The risk moves from exposure to invocation. The question stops being "who can see the secret?" and becomes "who can trigger its use?" Our take on the right answer: the LLM should never see the secret in the first place. Check out our latest blog from Lipyeow Lim and Dominic Salas. Link in the comments.
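The principle in the post above — the LLM never sees the secret, and invocation is the thing you gate — can be sketched as a simple tool broker. This is an illustrative sketch only, not Crogl's implementation; the tool names, the `SIEM_API_TOKEN` variable, and the allow-list shape are all hypothetical.

```python
import os

class ToolBroker:
    """Mediates every tool call: resolves credentials server-side and
    sanitizes results, so secrets never enter the model's context."""

    def __init__(self, allowed_tools):
        # Allow-list answers "who can trigger the secret's use?"
        self.allowed_tools = allowed_tools

    def invoke(self, tool_name, args):
        if tool_name not in self.allowed_tools:
            raise PermissionError(f"tool not permitted: {tool_name}")
        # The credential is fetched here, inside the broker, and is
        # passed only to the tool implementation, never to the model.
        token = os.environ.get("SIEM_API_TOKEN", "")
        result = self.allowed_tools[tool_name](args, token)
        return self._sanitize(result)

    @staticmethod
    def _sanitize(result):
        # Strip anything secret-shaped before it reaches the model.
        return {k: v for k, v in result.items() if k != "auth"}

def query_siem(args, token):
    # Placeholder for a real SIEM query; the token is used, not exposed.
    return {"hits": 3, "query": args["query"], "auth": token}

broker = ToolBroker({"query_siem": query_siem})
print(broker.invoke("query_siem", {"query": "failed logins last 24h"}))
```

The point of the pattern: even a fully prompt-injected model can only ask the broker to run an allow-listed tool, and only ever receives sanitized output.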

  • "I've got Claude. I've got a weekend. How hard can it be?" Our principal engineer Alec Kloss hears this every time someone watches a purpose-built AI harness work through a real investigation. Here's what the research says: the same model, wrapped in different harnesses, produces up to a 6x performance gap on identical benchmarks. Not a different model. Same weights, same training. Different scaffolding. The model is table stakes. Everyone has the same frontier models. The differentiation is in what surrounds them. Context management. Memory. Orchestration logic that knows which log sources matter at which stage of an investigation. You can build a harness that works in a demo. The gap between that and one that handles real investigations reliably? That's the 6x. In security, that's the difference between catching the intrusion and writing the breach report. Alec's full breakdown is in the comments.
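The "scaffolding" the post above describes can be made concrete with a toy example. The stage names and log sources below are hypothetical, assumed for illustration; the idea is only that the harness, not the model, encodes which sources matter at which stage of an investigation.

```python
# Hypothetical mapping from investigation stage to relevant log sources.
# A real harness would learn or configure this; here it is hard-coded.
STAGE_SOURCES = {
    "triage":      ["edr_alerts", "identity_logs"],
    "scoping":     ["dns_logs", "proxy_logs", "netflow"],
    "containment": ["edr_alerts", "firewall_logs"],
}

def plan_queries(stage, indicator):
    """Return the queries the harness would hand to the model's tools
    for this stage, rather than letting the model guess where to look."""
    sources = STAGE_SOURCES.get(stage, [])
    return [f"search {src} where indicator='{indicator}'" for src in sources]

print(plan_queries("scoping", "10.0.0.5"))
```

Two harnesses wrapping the same model differ exactly here: one hands the model three targeted queries at the scoping stage, the other hands it nothing and hopes.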

  • 50% of SOC teams say integrating AI into workflows is their biggest barrier. 49% say their data is too dispersed to normalize. Those aren't two problems. They're the same problem from two angles. Traditional approaches require you to normalize your data before AI can use it. That project takes months, requires dedicated engineering, and starts breaking the moment a new data source comes online. Meanwhile, 61% of practitioners are concerned about third-party AI vendors using their security data to enrich AI services. Security data reveals infrastructure topology, detection logic, response playbooks, and the specific vulnerabilities you're most exposed to. 45% of SOC environments in our study are air-gapped. For those teams, cloud-hosted AI isn't a preference question. It's off the table entirely. The real constraint isn't "is the AI good enough." It's can you deploy AI where your data already lives, without moving it, normalizing it, or handing it to a third party? Check out the full report with five actionable findings from 649 practitioners. Link in comments.

  • Jon Oltsik published his 12 industry takeaways from RSAC 2026. One stood out to us. He drew a line between vendors bolting AI onto legacy tools and the startups building AI-native from the ground up. Data foundation. Context engine. Execution layer. Guardrails. Then agents on top. He named Crogl, Inc. in that second camp. We think this distinction matters more than any product category label. The vendors racing to add an "AI" prefix to their existing stack are buying themselves a year, maybe. The architectures that start with how data actually lives in a SOC environment will be the ones still standing when the hype clears. That tracks with what we hear from practitioners every day. Security teams don't need another copilot. They need agents that learn their schemas, work their tickets, and investigate alerts where the data already lives. No normalization. No playbook engineering. No data leaving. Read Jon's full article here: https://lnkd.in/gFxAqGKx

  • The attacker's AI agent doesn't pause to ask permission. That's the shift. Adversaries now task AI agents to run sophisticated campaigns at near-zero cost. No human approval loop. No waiting. The attack velocity has fundamentally changed. So the defender's side has to change too. monzy merza joined Dejan Kosutic on Secure and Simple to break down what "agent versus agent" actually means for security teams: → Why the window defenders have to respond is compressing fast → How AI SOC agents pull from multiple data sources, enrich context, and deliver completed investigations automatically → Where humans still need to be in the loop, and where they don't → How organizations build trust in AI through phased, measurable adoption The fight isn't human versus human anymore. Your SOC needs to operate at the same speed as the threat. 🎧 Listen to the full episode on YouTube, Spotify, and Apple Podcasts. Link in comments.

  • Most SOC teams already know the truth. Now there is a number to go with it. Nearly 40% of enterprise security alerts go completely uninvestigated. Not misclassified. Not deprioritized. Missed entirely. That finding comes from the Crogl State of the AI SOC report, surveying 600+ organizations. At RSA Conference 2026, monzy merza from Crogl, Inc. sat down with Sean Martin and Marco Ciappelli of Studio C60 / ITSPmagazine to talk about what changes when you stop trying to normalize your data and start investigating every alert instead. What they covered: - Why data normalization is the wrong starting point for most SOC modernization efforts - How Crogl runs full investigations across fragmented data sources without moving or copying data - Why AI in the SOC means more security jobs, not fewer - What CISOs should be doing right now to close the investigation gap If your team is buried in alert backlogs, managing multiple SIEMs and data lakes, or trying to figure out where AI fits without giving up control of your environment, this conversation is worth 30 minutes ⬇️ Watch the video: https://lnkd.in/gtcgB7X6 Listen to the podcast: https://lnkd.in/gw4uMcG7 Learn more about us: https://www.crogl.com #SecurityOperations #RSAC2026 #AlertFatigue #CyberSecurity #SOC #CISO #ThreatIntelligence

  • Crogl, Inc. reposted this

    monzy merza didn't start Crogl, Inc. because he saw a market opportunity. He started it because he sat on a customer call, heard something he'd been hearing for years: "we're never going to put all our data in one place," and realized the entire industry had been ignoring it. But instead of writing a pitch deck, he did something I've never seen a founder do. He called HSBC and asked for a job. Not as a leader. Not as an executive. He wanted to sit in the chair of the person he was going to build for - a security analyst buried in 400 alerts a day - and feel what that work actually feels like. Here's what he found: The problem isn't alert volume. It's three things nobody talks about: domain knowledge gaps, tool competency gaps, and collaboration friction between analysts. Every vendor for the last decade has said "too many alerts, not enough analysts." The real blockers are deeper than that. 399 out of 400 alerts don't matter. But the one you miss ends careers. The signal of almost every breach was already there. It just wasn't attended to. That one miss leads to breaches, regulatory fines, the CISO gets fired, and the whole cycle starts again. The easy problems are already solved. The valuable ones are boring. Monzy's take: the problems worth solving now are unsexy, uninteresting, and require getting dirty. That's exactly why nobody has solved them yet. Passion isn't excitement. It's commitment. His definition stuck with me - passion is not what you're excited about today. It's what you're willing to sacrifice for over a very long period of time to serve a community. That's the real founder-market fit. Oh, and before all of this - he spent 12 years in a nuclear weapons lab. Full episode out now! #InsideTheSiliconMind #Cybersecurity #Founders #AI

  • What a great conversation between monzy merza and Firas Sozan. Take a quick listen 🎧

    Most founders are chasing the wrong problems. We’re conditioned to look for:
    - big ideas
    - exciting markets
    - obvious opportunities
    But the reality is: the most valuable problems are often the ones nobody wants to touch. On this week's Inside the Silicon Mind episode with monzy merza (Co-Founder & CEO at Crogl, Inc.), something stood out: the difference between hearing customers… and actually understanding them. It’s subtle - but it’s where most companies go wrong. We also unpack:
    - what founder-market fit really means
    - why experience still matters (more than people admit)
    - and how real insight turns into real companies
    Episode drops tomorrow.


Funding

Crogl, Inc. 2 total rounds

Last Round

Series A

US$ 25.0M