Inspiration

We were inspired to build Sentinel because enterprises are moving rapidly into the era of autonomous agents powered by OpenAI models and the AGI SDK, but they lack reliable safeguards around what these agents can actually do. A single hallucinated API call from an OpenAI-powered system can delete customer data, send money to the wrong place, or escalate privileges. We realized that traditional prompting strategies cannot guarantee safety, which pushed us to design a true middleware layer that stands between the agent and the real world. We wanted to lean on sponsor tools: Sentry's rich observability tooling to detect instability early and guide safer agent behavior, and Telnyx AI Voice to involve humans when needed.

Our vision came from treating Sentinel as the conscience of the agent, similar to a psychological Super-Ego. We designed it using OpenAI and the AGI SDK for model reasoning, Sentry for real-time stability and tracing, Telnyx for conversational authorization, and Lovable to help us quickly build the dashboard. Together, these sponsors shaped the core architecture. Sentinel is not a patch or a plugin. It is a safety operating system that uses OpenAI reasoning but keeps guardrails enforced through Sentry metrics and Telnyx human verification.

What it does

Sentinel intercepts every action generated by an AGI model running through the AGI SDK and classifies the action as safe or high risk. Safe actions such as data reads pass through immediately. High-risk actions like payments, deletions, or exports are blocked until they undergo a multi-step review. The first review uses Sentry. By calling the Sentry Stats API and evaluating crash-free session rates, Sentinel ensures that no AGI-powered workflow proceeds while backend systems are unstable. This transforms Sentry observability from monitoring into active enforcement.
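The stability gate described above can be sketched in a few lines. This is a simplified illustration, not Sentinel's actual code: the Sentry Stats API call is stubbed out, and the 99% threshold is an assumed value the writeup does not specify.

```python
from dataclasses import dataclass

CRASH_FREE_THRESHOLD = 0.99  # assumed threshold; not stated in the writeup

@dataclass
class GateDecision:
    allowed: bool
    reason: str

def gate_high_risk_action(crash_free_rate: float,
                          threshold: float = CRASH_FREE_THRESHOLD) -> GateDecision:
    """Block high-risk agent actions while backend systems are unstable.

    In the real system, crash_free_rate would come from a Sentry Stats API
    query; here it is passed in directly so the gating logic stands alone.
    """
    if crash_free_rate < threshold:
        return GateDecision(False,
                            f"crash-free rate {crash_free_rate:.2%} "
                            f"below {threshold:.2%}; action blocked")
    return GateDecision(True, "backend stable; action may proceed")

# Example: a 97% crash-free session rate blocks the workflow.
decision = gate_high_risk_action(0.97)
```

The key design point is that observability data becomes an input to an allow/deny decision rather than a passive dashboard metric.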

If an action fails a policy check, Sentinel sends the request to the human approval flow powered by Telnyx AI Voice. Through Telnyx TexML and Telnyx function-calling, the voice agent can explain the issue to the human and walk them through a guided authorization flow. On the frontend, the UI scaffolding generated by Lovable gives a clean, mission-control-like view of Sentry health, AGI reasoning traces, and AGI SDK tool calls. Together, these sponsor technologies form a seamless loop: the agent acts autonomously, Sentry enforces safety checks, Telnyx reaches out for admin approval and human oversight, and the interactive dashboard displays Sentinel's risk analysis, reasoning, and logs.

How we built it

The core agent layer uses the AGI SDK with AGI models, but we intentionally reject open-ended tool access. Instead, we built a strict proxy called SentinelGateway that validates every tool call using typed schemas. This ensures that every payload generated by the AGI model is structured, predictable, and compliant with risk requirements. We also rely heavily on Sentry distributed tracing. Each call through the AGI SDK creates Sentry transactions and spans, which are enriched with metadata about risk scores and violated business rules. These Sentry traces help engineers see exactly how an AGI agent reached its decision.
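To make the SentinelGateway idea concrete, here is a dependency-free sketch of the validation step. The real gateway uses Pydantic v2 models; this mirrors the same checks for one illustrative "payment" tool, and the risk-tier list and field rules are our assumptions.

```python
import re

HIGH_RISK_TOOLS = {"payment", "delete", "export"}  # assumed risk tiers

def validate_payment(payload: dict) -> list[str]:
    """Collect schema violations for an agent-generated payment call."""
    errors = []
    if not payload.get("vendor_id"):
        errors.append("vendor_id must be a non-empty string")
    amount = payload.get("amount_cents")
    if not isinstance(amount, int) or amount <= 0:
        errors.append("amount_cents must be a positive integer")
    if not re.fullmatch(r"[A-Z]{3}", str(payload.get("currency", ""))):
        errors.append("currency must be a 3-letter ISO code")
    return errors

def review_tool_call(tool: str, payload: dict) -> tuple[bool, list[str]]:
    """Return (needs_human_review, violations) for one tool call.

    Reads pass through; high-risk tools or malformed payloads are
    escalated rather than executed.
    """
    errors = validate_payment(payload) if tool == "payment" else []
    needs_review = tool in HIGH_RISK_TOOLS or bool(errors)
    return needs_review, errors
```

Rejecting free-form payloads at this boundary is what makes the model's output "structured, predictable, and compliant": nothing reaches a real tool unless it round-trips through a schema.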

For the human-in-the-loop flow, we built our approvals using Telnyx AI Voice. The AGI SDK triggers Telnyx calls for high-risk actions, and Telnyx webhooks send back signed responses to transition the Sentinel FSM. We also injected Sentry trace data into the Telnyx TexML script, which allows the Telnyx AI Assistant to pull real context into the conversation. Finally, Lovable helped us spin up the frontend quickly. We extended it with custom React logic to display Sentry health, Telnyx call states, and AGI task logs in real time. The result is a fully integrated stack powered by sponsor technologies at every layer.
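The FSM transitions driven by Telnyx webhooks can be sketched as a small lookup table. State and event names here are illustrative assumptions; the writeup only says that signed webhook responses transition the Sentinel FSM.

```python
# (current_state, webhook_event) -> next_state
TRANSITIONS = {
    ("PENDING_CALL", "call.answered"): "AWAITING_DECISION",
    ("AWAITING_DECISION", "approval.granted"): "APPROVED",
    ("AWAITING_DECISION", "approval.denied"): "REJECTED",
    ("PENDING_CALL", "call.failed"): "REJECTED",  # fail closed on errors
}

def transition(state: str, event: str) -> str:
    """Apply one webhook event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# A successful approval call walks PENDING_CALL -> AWAITING_DECISION -> APPROVED.
state = "PENDING_CALL"
for event in ("call.answered", "approval.granted"):
    state = transition(state, event)
```

Keeping the table explicit means every reachable state is enumerable, which is what lets Sentinel fail closed instead of guessing when a webhook arrives out of order.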

Challenges we ran into

One major challenge was multi-hop latency. Each action required OpenAI model inference through the AGI SDK, followed by Sentry Stats API checks and Telnyx call setup. This created moments of silence that made users think the system was stuck. We solved this using a Lovable-based frontend that streams immediate progress logs while waiting for the backend. These logs reflected the Sentry connection checks and Telnyx call initialization so the user always felt engaged. This challenge taught us how essential sponsor technologies like Sentry and Telnyx are in building responsive AI systems.

Another challenge was the Telnyx AI model not having context. Because Telnyx AI Voice runs in a different environment, it initially had no knowledge of why Sentinel blocked an action. Without context, it hallucinated generic explanations. Our fix was to inject Sentry trace metadata into the Telnyx TexML instructions. This allowed the Telnyx model to reference real Sentry metrics and logs during the call. We also had to prevent the OpenAI model from hallucinating a successful result before the human approved it. We solved this using a blocking interceptor in the AGI SDK tool logic that forces the OpenAI agent to wait for the Telnyx webhook.
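The blocking interceptor can be sketched with a simple synchronization primitive. This is an assumed shape, not Sentinel's actual implementation: the tool call blocks on an event that the Telnyx webhook handler sets, so the agent cannot observe a result before the human decides.

```python
import threading

class ApprovalInterceptor:
    """Holds an agent's tool call open until a human decision arrives."""

    def __init__(self):
        self._event = threading.Event()
        self._approved = False

    def resolve(self, approved: bool) -> None:
        """Called by the webhook handler when the human decides."""
        self._approved = approved
        self._event.set()

    def wait(self, timeout: float = 300.0) -> bool:
        """Block the tool call; deny on timeout so the system fails closed."""
        if not self._event.wait(timeout):
            return False
        return self._approved

# Simulate a webhook arriving 50 ms after the agent starts waiting.
interceptor = ApprovalInterceptor()
threading.Timer(0.05, interceptor.resolve, args=(True,)).start()
result = interceptor.wait(timeout=2.0)
```

Because the tool call itself never returns until `resolve` fires, the model has no success signal to hallucinate around.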

Accomplishments that we're proud of

One of our biggest accomplishments is building true human-in-the-loop security by combining OpenAI, Sentry, and Telnyx AI Voice. A human can ask the Telnyx agent questions like “Why was this blocked?” and the model will answer using Sentry trace data. This creates a transparent and auditable safety flow. The AGI SDK helped us enforce strong structural constraints, ensuring the OpenAI model only generates actions within safe bounds.

Another major accomplishment is reimagining how Sentry is used. Instead of treating Sentry as a debugging tool, we turned it into a runtime policy engine. Sentinel depends on Sentry Stats API results to allow or deny AGI actions. We also observed how seamlessly we could build clean UI experiences with Lovable, combining it with real-time Sentry logs and Telnyx states. This product would not exist without the deep technical contributions of all our sponsor tools.

What we learned

We learned that using AGI with the AGI SDK requires deep understanding of failure modes. LLMs often hallucinate confident but unsafe actions, so architecture must be designed with strict schema enforcement. By coupling AGI reasoning with Sentry safety checks and Telnyx approval voice calls, we saw how important sponsor tools are in building real enterprise AI systems. We also learned how to enforce deterministic decision-making by forcing AGI models into Pydantic-valid schemas.

Working with Telnyx AI Voice taught us how complex voice AI actually is, especially when mixed with Sentry trace injection. We gained experience in function calling, webhook handling, and real-time context feeding. On the UI side, Lovable helped us move quickly by giving us a solid template to modify. Combined with Sentry’s distributed tracing, we built a transparent system where engineers can see every model decision. This blend of sponsors shaped our understanding of how safety and autonomy must coexist.

What's next for Sentinel

Our next steps include deepening our integration with Sentry to support predictive safety. We want Sentinel to use Sentry time-series metrics to identify instability before it occurs and proactively pause OpenAI agents. We also plan to expand Telnyx conversational flows so that approval calls feel more like interactive discussions. On the agent side, we aim to integrate more advanced OpenAI reasoning patterns using the AGI SDK to create stricter, more controllable tool schemas. The frontend will continue to evolve with Lovable as our base.

The most ambitious roadmap item is building self-healing automations powered by Sentry AI Autofix. Sentinel will use Sentry trace history and AGI SDK schemas to repair unsafe OpenAI actions instead of simply blocking them. For example, if an invalid Vendor ID is detected, Sentinel will query Sentry history and reconstruct the correct payload. Then, the Telnyx AI Voice system will ask the human: “Do you want to approve the corrected version that Sentinel created?” This creates a closed-loop workflow powered by Sentry, Telnyx, OpenAI, AGI SDK, and Lovable. It moves Sentinel from a passive gatekeeper to an active problem solver.

Built With

  • api
  • context-injection
  • distributed
  • error-monitoring
  • fastapi
  • github
  • groq
  • lovable-react-components
  • lovable-ui-generator
  • openai-models
  • performance
  • postcss
  • pydantic-v2
  • pydantic-v2-tool-schemas
  • python
  • react
  • sentry
  • stats
  • telnyx
  • telnyx-ai-voice
  • telnyx-function-calling
  • telnyx-real-time-voice-assistant
  • telnyx-webhook-validation
  • transactions
  • typescript
  • vite