hyperstack-core

The Agent Provenance Graph for AI agents — the only memory layer where agents can prove what they knew, trace why they knew it, and coordinate without an LLM in the loop. $0 per operation at any scale.

Timestamped facts. Auditable decisions. Deterministic trust. Build agents you can trust at $0/operation.

Current version: v1.5.2

npm i hyperstack-core
npx hyperstack-core init openclaw-multiagent

The Problem

Most AI agent setups coordinate through markdown files:

# DECISIONS.md (append-only)
- 2026-02-15: Use Clerk for auth (coder-agent)
- 2026-02-16: Migration blocks production deploy (ops-agent)

Try answering: "What breaks if auth changes?"

With markdown, the best you can do is grep -r "auth" *.md — manual, fragile, and it returns text blobs, not answers.

The Solution

import { HyperStackClient } from "hyperstack-core";

const hs = new HyperStackClient({ apiKey: "hs_..." });

// Store a decision with typed relations
await hs.store({
  slug: "use-clerk",
  title: "Use Clerk for auth",
  body: "Better DX, lower cost, native Next.js support",
  cardType: "decision",
  links: [{ target: "auth-api", relation: "affects" }],
});

// Store a blocker
await hs.store({
  slug: "migration-23",
  title: "Auth migration to Clerk",
  cardType: "task",
  links: [{ target: "deploy-prod", relation: "blocks" }],
});

// What blocks deploy?
const result = await hs.blockers("deploy-prod");
// → { blockers: [{ slug: "migration-23", title: "Auth migration to Clerk" }] }

// What breaks if auth changes?
const impact = await hs.impact("use-clerk");
// → [auth-api, deploy-prod, billing-v2]

// Batch store
await hs.bulkStore([
  { slug: "p1", title: "Project A", body: "..." },
  { slug: "p2", title: "Project B", body: "..." },
]);

// Agentic routing (deterministic, no LLM)
const canDo = await hs.can({ query: "what breaks if auth changes?", slug: "auth-api" });
const steps = await hs.plan({ goal: "add 2FA to auth-api" });

// Parse markdown/logs into cards (CLI: npx hyperstack-core ingest)
await hs.parse("# DECISIONS.md content", { source: "decisions" });

// Ingest conversation transcript
await hs.autoRemember("Alice is senior engineer. We decided Clerk for auth.");

// Memory hub: working / semantic / episodic
const cards = await hs.hsMemory({ surface: "semantic" });

Typed relations, not text blobs. task→blocks→deploy is queryable. A paragraph in DECISIONS.md is not.
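The contrast can be made concrete with a tiny in-memory sketch. This is illustrative only, not HyperStack's implementation; the types and data are invented:

```typescript
// Minimal in-memory sketch of a typed relation store.
// A typed edge answers "what blocks X?" with a filter, not a grep.
type Edge = { from: string; relation: string; to: string };

const edges: Edge[] = [
  { from: "use-clerk", relation: "affects", to: "auth-api" },
  { from: "migration-23", relation: "blocks", to: "deploy-prod" },
];

function blockers(target: string): string[] {
  return edges
    .filter((e) => e.relation === "blocks" && e.to === target)
    .map((e) => e.from);
}

console.log(blockers("deploy-prod")); // → ["migration-23"]
```

A paragraph in DECISIONS.md carries the same information but none of this structure, which is why grep is the ceiling for markdown coordination.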

REST API: Always use X-API-Key, never Authorization: Bearer:

curl -X POST "https://hyperstack-cloud.vercel.app/api/cards" \
  -H "X-API-Key: hs_your_key" \
  -H "Content-Type: application/json" \
  -d '{"slug":"use-clerk","title":"Use Clerk","body":"Auth decision"}'

MCP — Works in Cursor, Claude Desktop, VS Code, Windsurf

{
  "mcpServers": {
    "hyperstack": {
      "command": "npx",
      "args": ["-y", "hyperstack-mcp"],
      "env": {
        "HYPERSTACK_API_KEY": "hs_your_key",
        "HYPERSTACK_WORKSPACE": "my-project",
        "HYPERSTACK_AGENT_SLUG": "cursor-agent"
      }
    }
  }
}

10 MCP tools: hs_store, hs_search, hs_get, hs_list, hs_graph, hs_blockers, hs_impact, hs_feedback, hs_fork, hs_identify


The Memory Hub — Three Memory Surfaces

Same typed graph, three distinct APIs with different retention behaviour.

Episodic Memory — what happened and when

GET /api/cards?workspace=X&memoryType=episodic
  • Event traces, agent actions, session history
  • 30-day soft decay curve (agent-used cards decay at half rate)
  • Returns decayScore, daysSinceCreated, isStale per card
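The decay behaviour can be pictured with a small sketch. The real curve is computed server-side and is not published; this assumes a simple linear decay to zero over the 30-day window, with the documented half-rate for agent-used cards:

```typescript
// Hypothetical sketch of episodic decay (the real curve may differ).
// Assumes linear decay over 30 days, halved for agent-used cards.
function decayScore(daysSinceCreated: number, agentUsed: boolean): number {
  const windowDays = 30;
  const rate = agentUsed ? 0.5 : 1.0; // agent-used cards decay at half rate
  return Math.max(0, 1 - (daysSinceCreated * rate) / windowDays);
}

decayScore(15, false); // halfway through the window: 0.5
decayScore(15, true);  // agent-used, half rate: 0.75
```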

Semantic Memory — facts that never age

GET /api/cards?workspace=X&memoryType=semantic
  • Decisions, people, projects, workflows, preferences
  • Permanent — no decay, no expiry
  • Returns confidence, truth_stratum, isVerified per card

Working Memory — TTL-based scratchpad

GET /api/cards?workspace=X&memoryType=working
GET /api/cards?workspace=X&memoryType=working&includeExpired=true
  • Cards with TTL set — auto-hides expired by default
  • Agent-used cards get 1.5x TTL extension
  • Returns expiresAt, isExpired, ttlExtended per card
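The expiry rule above can be sketched as follows. Field names and the exact extension mechanics are assumptions; only the TTL-in-seconds and 1.5x extension come from the docs:

```typescript
// Hypothetical sketch of working-memory expiry.
// TTL is in seconds; agent-used cards get a 1.5x TTL extension.
function isExpired(
  createdAtMs: number,
  ttlSeconds: number,
  agentUsed: boolean,
  nowMs: number
): boolean {
  const effectiveTtl = ttlSeconds * (agentUsed ? 1.5 : 1.0);
  return nowMs > createdAtMs + effectiveTtl * 1000;
}

isExpired(0, 3600, false, 3_700_000); // past the 1h TTL → true
isExpired(0, 3600, true, 3_700_000);  // extended to 1.5h → false
```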

Card Fields

| Field | Description |
| --- | --- |
| confidence | 0.0–1.0 confidence score |
| truthStratum | draft \| hypothesis \| confirmed |
| verifiedBy | e.g. "human:deeq" |
| verifiedAt | Auto-set server-side |
| memoryType | working \| semantic \| episodic |
| ttl | Working memory expiry (seconds) |
| sourceAgent | Auto-stamped after identify() |

Conflict Detection & Staleness

Conflict detection — structural, no LLM. Auto-detects contradicting cards (e.g. same slug, conflicting truthStratum).
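A minimal sketch of what "structural, no LLM" means here: contradictions are found by comparing fields, not by asking a model. The detection rule below (same slug, different truthStratum) follows the example in the text; the real checks may be broader:

```typescript
// Illustrative structural conflict check: two cards sharing a slug
// but disagreeing on truthStratum contradict each other.
type Card = { slug: string; truthStratum: "draft" | "hypothesis" | "confirmed" };

function findConflicts(cards: Card[]): string[] {
  const seen = new Map<string, string>();
  const conflicts: string[] = [];
  for (const c of cards) {
    const prev = seen.get(c.slug);
    if (prev !== undefined && prev !== c.truthStratum) conflicts.push(c.slug);
    seen.set(c.slug, c.truthStratum);
  }
  return conflicts;
}

findConflicts([
  { slug: "use-clerk", truthStratum: "confirmed" },
  { slug: "use-clerk", truthStratum: "draft" },
]); // → ["use-clerk"]
```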

Staleness cascade — upstream changes mark dependents stale. When a card is updated, linked cards get isStale until refreshed.
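The cascade is a graph traversal from the updated card. A sketch under assumed data shapes (the link map and slugs here are invented):

```typescript
// Illustrative staleness cascade: updating a card marks everything
// reachable downstream via its links as stale.
const links: Record<string, string[]> = {
  "use-clerk": ["auth-api"],
  "auth-api": ["deploy-prod"],
};

function markStale(updated: string): Set<string> {
  const stale = new Set<string>();
  const queue = [...(links[updated] ?? [])];
  while (queue.length > 0) {
    const slug = queue.pop()!;
    if (stale.has(slug)) continue;
    stale.add(slug);
    queue.push(...(links[slug] ?? []));
  }
  return stale;
}

markStale("use-clerk"); // → Set { "auth-api", "deploy-prod" }
```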


Decision Replay

Reconstruct exactly what an agent knew when a decision was made.

// What did the agent know when "use-clerk" was decided?
const replay = await hs.graph({ from: "use-clerk", mode: "replay" });

// replay.narrative:
// "Decision: [Use Clerk for Auth] made at 2026-02-19T20:59:00Z"
// "Agent knew 1 of 2 connected cards at decision time."
// "⚠️ 1 card(s) were modified after the decision (potential hindsight): [blocker-clerk-migration]"

Use cases: compliance audits, agent debugging, post-mortems.


Utility-Weighted Edges

The graph gets smarter the more you use it. Report success/failure after every agent task.

// Report success/failure — updates utility scores on edges
await hs.feedback({
  cardSlugs: ["use-clerk", "auth-api"],
  outcome: "success",
  taskId: "task-auth-refactor"
});

// Retrieve most useful cards first
GET /api/cards?workspace=X&sortBy=utility

// Graph traversal weighted by utility
GET /api/graph?from=auth-api&weightBy=utility

Cards that consistently help agents succeed get promoted. Cards in failed tasks decay.
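The promote/decay dynamic can be sketched with a simple update rule. The actual scoring lives server-side and is not documented; an exponential moving average is assumed here purely for illustration:

```typescript
// Hypothetical utility update: success pulls the score toward 1,
// failure pulls it toward 0, with learning rate alpha.
function updateUtility(
  current: number,
  outcome: "success" | "failure",
  alpha = 0.2
): number {
  const target = outcome === "success" ? 1 : 0;
  return current + alpha * (target - current); // exponential moving average
}

let utility = 0.5;
utility = updateUtility(utility, "success"); // rises toward 1
utility = updateUtility(utility, "failure"); // falls back toward 0
```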


Git-Style Memory Branching

Experiment safely without corrupting live memory.

// Fork before an experiment
const branch = await hs.fork({ branchName: "try-new-routing" });

// Make changes in the branch
await hs.store({ slug: "new-approach", title: "...", ... });

// See what changed
await hs.diff({ branchWorkspaceId: branch.branchWorkspaceId });

// Merge if it worked
await hs.merge({ branchWorkspaceId: branch.branchWorkspaceId, strategy: "branch-wins" });

// Or discard if it didn't
await hs.discard({ branchWorkspaceId: branch.branchWorkspaceId });

Requires Pro plan or above.


Agent Identity + Trust

// Register at session start (idempotent)
await hs.identify({ agentSlug: "research-agent" });

// All hs.store() calls auto-stamp sourceAgent
await hs.store({ slug: "finding-001", ... });

// Check trust score
const profile = await hs.profile({ agentSlug: "research-agent" });
// → { trustScore: 0.84, verifiedCards: 42, cardCount: 50 }
// Formula: (verifiedCards/total)×0.7 + min(cardCount/100,1.0)×0.3
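The documented formula, written out as a standalone function (the card counts in the example below are arbitrary):

```typescript
// Trust score per the documented formula:
// (verifiedCards / total) × 0.7 + min(cardCount / 100, 1.0) × 0.3
function trustScore(verifiedCards: number, cardCount: number): number {
  if (cardCount === 0) return 0;
  return (
    (verifiedCards / cardCount) * 0.7 + Math.min(cardCount / 100, 1.0) * 0.3
  );
}

trustScore(100, 100); // fully verified, at volume cap: 1.0
trustScore(50, 100);  // half verified: 0.65
```

The volume term means a fully verified agent with only a handful of cards still scores below 1.0 until it has stored around 100 cards.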

Trust & Provenance

Every card carries epistemic metadata.

// Store with provenance
await hs.store({
  slug: "finding-latency",
  body: "p99 latency ~200ms under load",
  confidence: 0.6,
  truthStratum: "hypothesis"  // draft | hypothesis | confirmed
});

// After verification
await hs.store({
  slug: "finding-latency",
  confidence: 0.95,
  truthStratum: "confirmed",
  verifiedBy: "human:deeq"
  // verifiedAt auto-set server-side
});

CLI

# Ingest a README into cards + edges (no manual card creation)
npx hyperstack-core ingest ./README.md
npx hyperstack-core ingest ./docs/                    # whole directory
cat DECISIONS.md | npx hyperstack-core ingest --source decisions  # stdin pipe
npx hyperstack-core ingest ./README.md --dry          # preview without storing

# Store a card
npx hyperstack-core store --slug "use-clerk" --title "Use Clerk" --type decision

# Record a decision
npx hyperstack-core decide --slug "use-clerk" --title "Use Clerk" --rationale "Better DX"

# Check blockers
npx hyperstack-core blockers deploy-prod

# Traverse graph — forward (what this card points to)
npx hyperstack-core graph auth-api --depth 2

# Traverse graph — reverse (what points at this card)
npx hyperstack-core graph auth-api --depth 2 --reverse

# Search
npx hyperstack-core search "authentication setup"

# List
npx hyperstack-core list

Self-Hosted Docker

# With OpenAI embeddings
docker run -d -p 3000:3000 \
  -e DATABASE_URL=postgresql://... \
  -e JWT_SECRET=your-secret \
  -e OPENAI_API_KEY=sk-... \
  ghcr.io/deeqyaqub1-cmd/hyperstack:latest

# Fully local — Ollama embeddings
docker run -d -p 3000:3000 \
  -e DATABASE_URL=postgresql://... \
  -e JWT_SECRET=your-secret \
  -e EMBEDDING_BASE_URL=http://host.docker.internal:11434 \
  -e EMBEDDING_MODEL=nomic-embed-text \
  ghcr.io/deeqyaqub1-cmd/hyperstack:latest

# Keyword only — no embeddings needed
docker run -d -p 3000:3000 \
  -e DATABASE_URL=postgresql://... \
  -e JWT_SECRET=your-secret \
  ghcr.io/deeqyaqub1-cmd/hyperstack:latest

Then set HYPERSTACK_BASE_URL=http://localhost:3000 in your config.

Full guide: SELF_HOSTING.md


Python + LangGraph

pip install hyperstack-py
pip install hyperstack-langgraph

from hyperstack import HyperStack

hs = HyperStack(api_key="hs_...", workspace="my-project")
hs.identify(agent_slug="my-agent")

# Branch, experiment, merge (same model as the TypeScript SDK)
branch = hs.fork(branch_name="experiment")
hs.merge(branch_workspace_id=branch["branchWorkspaceId"], strategy="branch-wins")

# LangGraph integration
from hyperstack_langgraph import HyperStackClient
client = HyperStackClient(api_key="hs_...", workspace="my-project")

Why Not Mem0 / Zep / Engram?

| Feature | HyperStack | Mem0 | Zep | Engram |
| --- | --- | --- | --- | --- |
| Typed directed relations | ✅ 10 types | ❌ LLM-extracted | ❌ generic | |
| Utility-weighted edges | ✅ | | | |
| Git-style branching | ✅ | | | |
| Agent identity + trust | ✅ | | | |
| Provenance layer | ✅ | | | |
| Time-travel | ✅ | | | |
| Decision replay | ✅ | | | |
| Three memory surfaces | ✅ | | | |
| Self-hosted Docker | ✅ 1 command | ✅ complex | | |
| Cross-tool MCP | ✅ Cursor+Claude | | | |
| Cost per retrieval | $0 | ~$0.002 LLM | ~$0.002 LLM | usage-based |

Mem0 finds "similar" cards. HyperStack finds exactly what blocks task #42.


Setup

  1. Get a free API key: cascadeai.dev/hyperstack
  2. export HYPERSTACK_API_KEY=hs_your_key
  3. npm i hyperstack-core
| Plan | Price | Cards | Features |
| --- | --- | --- | --- |
| Free | $0 | 50 | ALL features — graph, impact, replay (no gate) |
| Pro | $29/mo | 500+ | Branching, priority support |
| Team | $59/mo | 500, 5 API keys | Everything in Pro + collaboration |
| Business | $149/mo | 2,000, 20 members | Everything in Team + scale |
| Self-hosted | $0 | Unlimited | Full feature parity |

License

MIT

About

Typed graph memory for AI agents. Ask "what blocks deploy?" and get exact answers. Works with OpenClaw, Cursor, Claude, LangGraph. Zero LLM cost.
