A programming language where uncertainty is a first-class citizen.
Documentation • Quick Start • Packages • Examples • Contributing
If you're using the legacy prism-uncertainty package, please migrate to @prism-lang/core:

npm uninstall prism-uncertainty
npm install @prism-lang/core
Install using your preferred package manager:
# npm
npm install @prism-lang/core
npm install @prism-lang/confidence # optional
# yarn
yarn add @prism-lang/core
yarn add @prism-lang/confidence # optional
# pnpm
pnpm add @prism-lang/core
pnpm add @prism-lang/confidence # optional
# Install CLI globally
npm install -g @prism-lang/cli # or yarn/pnpm

Get syntax highlighting and language support for VS Code:
# Download and install the extension
curl -L https://github.com/HaruHunab1320/Prism-TS/releases/download/v0.1.0/prism-lang-0.1.0.vsix -o prism-lang.vsix
code --install-extension prism-lang.vsix
rm prism-lang.vsix

Features:
- Full syntax highlighting for all Prism features
- Semantic colors for confidence operators
- Light and dark themes optimized for Prism
- Auto-indentation and bracket matching
Create a file hello.prism:
// hello.prism
const name = "World"
const greeting = llm("Create a friendly greeting for ${name}")
console.log(greeting)
// Make decisions based on confidence
let response = llm("Should we proceed?") ~> 0.75
uncertain if (response) {
high { console.log("✅ Proceeding with confidence!") }
medium { console.log("⚠️ Proceeding with caution...") }
low { console.log("❌ Too uncertain, aborting.") }
}
Run it:
# Execute a Prism file
prism run hello.prism
prism run --watch hello.prism # hot reload while editing
# Stream a one-off LLM prompt (Ctrl+C to cancel)
prism llm --provider claude --model claude-3-haiku --temperature 0.2 --stream "Summarize today's status update"
# Or use the REPL for interactive development
prism
# Evaluate expressions directly
prism eval "2 + 2 ~> 0.99"
# Advanced CLI flags:
# --model <id> Override the provider model
# --timeout <ms> Abort long-running prompts
# --include-reasoning Request reasoning metadata when the provider supports it
# --no-structured-output Force plain text responses (required for streaming)

Inside the REPL, use :stream <prompt> to watch tokens arrive in real time (press Ctrl+C to cancel).
import { parse, createRuntime } from '@prism-lang/core';
const code = `
// AI responses with confidence
const analysis = llm("Is this secure?") ~> 0.85
// Confidence-aware decisions
uncertain if (analysis) {
high { deploy() }
medium { review() }
low { abort() }
}
`;
const ast = parse(code);
const runtime = createRuntime();
const result = await runtime.execute(ast);

Prism is organized as a monorepo with focused, modular packages:
| Package | Description |
|---|---|
| @prism-lang/core | Core language implementation (parser, runtime, types) |
| @prism-lang/confidence | Confidence extraction from LLMs and other sources |
| @prism-lang/llm | LLM provider integrations (Claude, Gemini, OpenAI) |
| @prism-lang/cli | Command-line interface |
| @prism-lang/repl | Interactive REPL |
Every AI application deals with uncertainty, but traditional languages pretend it doesn't exist. Prism makes uncertainty explicit and manageable.
// Traditional approach: Uncertainty is hidden
let result = llm_call()
if (result) { /* hope for the best */ }
// Prism: Uncertainty is explicit
let result = llm_call() ~> 0.7
uncertain if (result) {
high { proceed_with_confidence() }
medium { add_human_review() }
low { need_more_data() }
}
// Ensemble multiple models with confidence
const claude_says = llm("Analyze risk", { provider: "claude" }) ~> 0.9
const gpt_says = llm("Analyze risk", { provider: "gpt4" }) ~> 0.8
const gemini_says = llm("Analyze risk", { provider: "gemini" }) ~> 0.7
// Automatically use highest confidence result
let best_analysis = claude_says ~||> gpt_says ~||> gemini_says
// Confidence-aware null coalescing
let decision = best_analysis ~?? fallback_analysis ~?? "manual_review"
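Under the hood, `~||>` keeps whichever operand carries the higher confidence. A minimal TypeScript sketch of that selection rule (the `Uncertain<T>` shape and `pickHighest` helper are illustrative only, not the actual runtime API):

```typescript
// Illustrative shape for a value tagged with confidence (not the real runtime type).
interface Uncertain<T> {
  value: T;
  confidence: number; // 0..1
}

// Sketch of `~||>`: keep the operand with the highest confidence.
function pickHighest<T>(...candidates: Uncertain<T>[]): Uncertain<T> {
  return candidates.reduce((best, next) =>
    next.confidence > best.confidence ? next : best
  );
}

const claude = { value: "low risk", confidence: 0.9 };
const gpt4 = { value: "medium risk", confidence: 0.8 };
const gemini = { value: "low risk", confidence: 0.7 };

console.log(pickHighest(claude, gpt4, gemini).value); // "low risk" (0.9 wins)
```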
Need a different provider, model, or temperature for a specific prompt? Pass an options object to llm():
const structured = llm("Summarize the findings", {
provider: "claude",
model: "claude-3-sonnet",
temperature: 0.2,
maxTokens: 400
})
const recalibrated = llm("Explain this reasoning chain", {
extractor: response => response.confidence * 0.8
})
Supported fields: provider, model, temperature, maxTokens, topP, timeout, structuredOutput, includeReasoning, confidenceExtractor (used by providers like @prism-lang/llm), and an extractor function that can override the returned confidence by inspecting the raw response object.
Use stream_llm() to process tokens as they arrive:
let handle = stream_llm("Draft a haiku about autumn rain", { provider: "claude", structuredOutput: false })
let chunk = await handle.next()
while (chunk != null) {
console.log(chunk.text)
chunk = await handle.next()
}
let final = await handle.result()
console.log("Final confidence:", <~ final)
Call handle.cancel() to abort mid-stream (e.g., when a human takes over).
With @prism-lang/confidence:
import { confidence } from '@prism-lang/confidence';
// Extract confidence from any LLM response
const response = await llm("Is this safe?");
const conf = await confidence.extract(response);
// Multiple strategies available
const ensemble = await confidence.fromConsistency(
() => llm("Analyze this"),
{ samples: 5 }
);
// Domain-specific calibration
const calibrated = await confidence.calibrators.security
.calibrate(conf, { type: 'sql_injection' });

Note: confidence extraction is currently heuristic for most providers (due to limited log-prob access), so treat scores as decision-support signals rather than strict probabilities.
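Consistency-based extraction like confidence.fromConsistency above can be approximated by sampling the same prompt several times and scoring how often the answers agree. A hedged TypeScript sketch of that idea, with a stubbed sampler standing in for real LLM calls (the actual @prism-lang/confidence strategies may differ):

```typescript
// Estimate confidence as the fraction of samples agreeing with the
// majority answer — a common self-consistency heuristic, shown here
// as an illustration rather than the library's implementation.
async function consistencyConfidence(
  sample: () => Promise<string>,
  samples = 5
): Promise<{ answer: string; confidence: number }> {
  const counts = new Map<string, number>();
  for (let i = 0; i < samples; i++) {
    const answer = await sample();
    counts.set(answer, (counts.get(answer) ?? 0) + 1);
  }
  let best = "";
  let bestCount = 0;
  for (const [answer, count] of counts) {
    if (count > bestCount) { best = answer; bestCount = count; }
  }
  return { answer: best, confidence: bestCount / samples };
}

// Stubbed sampler: 4 of 5 calls return "safe".
const replies = ["safe", "safe", "unsafe", "safe", "safe"];
let i = 0;
const stub = async () => replies[i++ % replies.length];

consistencyConfidence(stub, 5).then(r =>
  console.log(r) // { answer: 'safe', confidence: 0.8 }
);
```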
- `~>`: Assign confidence
- `<~`: Extract confidence
- `~*`, `~/`, `~+`, `~-`: Confidence-preserving arithmetic
- `~==`, `~!=`, `~>`, `~<`: Confidence comparisons
- `~&&`, `~||`: Confidence logical operations
- `~??`: Confidence null coalescing
- `~||>`: Parallel confidence (ensemble)
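To make "confidence-preserving" concrete: an operation like `~+` adds the values while propagating a confidence for the result. The exact propagation rule is defined by the Prism runtime; one common convention is to carry the minimum operand confidence through. An illustrative TypeScript sketch under that assumption (the min rule is our assumption, not the documented Prism semantics):

```typescript
interface Tagged { value: number; confidence: number }

// Sketch of confidence-preserving addition (`~+`), assuming the result's
// confidence is the minimum of the operands' confidences — an assumption
// for illustration, not the documented Prism rule.
function confAdd(a: Tagged, b: Tagged): Tagged {
  return {
    value: a.value + b.value,
    confidence: Math.min(a.confidence, b.confidence),
  };
}

const x = { value: 10, confidence: 0.9 };
const y = { value: 5, confidence: 0.6 };
console.log(confAdd(x, y)); // { value: 15, confidence: 0.6 }
```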
// Uncertain conditionals
uncertain if (measurement) {
high { /* >70% confidence */ }
medium { /* 30-70% confidence */ }
low { /* <30% confidence */ }
}
// Uncertain loops
uncertain while (condition) {
high { /* >70% */ }
medium { /* 30-70% */ }
low { /* <30% */ }
default { /* fallback */ }
}
// Deterministic do/while
let count = 0
do {
count = count + 1
} while (count < 3)
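The high/medium/low arms in the uncertain blocks above dispatch on the documented confidence bands (>70% high, 30-70% medium, <30% low). A small TypeScript helper making that bucketing explicit (the function name is ours, not part of Prism):

```typescript
type Band = "high" | "medium" | "low";

// Map a confidence score to the band an `uncertain if` would take,
// using the thresholds documented above.
function band(confidence: number): Band {
  if (confidence > 0.7) return "high";
  if (confidence >= 0.3) return "medium";
  return "low";
}

console.log(band(0.85)); // "high"
console.log(band(0.5));  // "medium"
console.log(band(0.1));  // "low"
```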
- First-class functions and lambdas
- Async/await with confidence propagation
- Module system with import/export
- Confident ternary (~?) and confident assignment operators (~+=, ~-=, ~*=, ~/=)
- Destructuring with confidence preservation
- Rust-style match expressions with guards and patterns
- Type checking with typeof and instanceof
- try/catch/finally error handling
Note: We use pnpm and Turborepo for development. You'll need pnpm installed to contribute.
# Clone the repository
git clone https://github.com/HaruHunab1320/Prism-TS.git
cd Prism-TS
# Install pnpm if you don't have it
npm install -g pnpm
# Install dependencies
pnpm install
# Build all packages
pnpm build
# Run tests
pnpm test
# Start development mode
pnpm dev

We use changesets to manage versioning and publishing. This ensures all packages stay in sync and peer dependencies are correctly managed.
1. Make your changes and commit them.

2. Create a changeset to describe your changes:

   pnpm changeset # or pnpm release:create

   - Select which packages changed
   - Choose the bump type (patch/minor/major)
   - Write a description for the changelog

3. Check what will be released:

   pnpm release:check

4. Version the packages (updates package.json files and changelogs):

   pnpm release:version

   This automatically commits the version changes.

5. Publish to npm:

   pnpm release:publish

   This builds all packages, publishes them, and pushes git tags.

- Never use pnpm publish directly - it won't handle workspace protocols correctly
- All @prism-lang/* packages use fixed versioning - they move together
- Changesets automatically handles peer dependency version updates
- The workspace:* protocol is used for local development and automatically replaced during publishing
Users: Install our packages with any package manager (npm, yarn, pnpm):

npm install @prism-lang/core # Works with npm, yarn, or pnpm!

Contributors: Development requires pnpm for workspace management:

pnpm install # Must use pnpm for development

prism/
├── packages/
│   ├── prism-core/          # Core language implementation
│   ├── prism-confidence/    # Confidence extraction library
│   └── prism-llm/           # LLM provider integrations
├── apps/
│   ├── cli/                 # Command-line interface
│   └── repl/                # Interactive REPL
├── examples/                # Example Prism programs
├── docs/                    # Documentation
├── pnpm-workspace.yaml      # pnpm workspace configuration
└── turbo.json               # Turborepo configuration
- Getting Started - Quick start guide
- Language Guide - Complete language reference
- API Reference - All functions and operators
- Confidence Extraction - Using @prism-lang/confidence
- Examples - Real-world usage patterns
let code = read_file("user_submission.py")
let safety = llm("Analyze for vulnerabilities: " + code)
uncertain if (safety) {
high {
deploy_to_production()
log("Deployed with confidence: " + (<~ safety))
}
medium {
let results = run_sandboxed_tests(code)
if (results.pass) { deploy_to_staging() }
}
low {
send_to_security_team(code, safety)
}
}
let question = "Will it rain tomorrow?"
// Get predictions from multiple sources
let weather_api = fetch_weather_api() ~> 0.8
let model1 = llm(question, { provider: "claude" }) ~> 0.9
let model2 = llm(question, { provider: "gemini" }) ~> 0.85
let local_sensors = analyze_pressure() ~> 0.7
// Combine predictions with confidence weighting
let consensus = (weather_api ~+ model1 ~+ model2 ~+ local_sensors) ~/ 4
uncertain if (consensus) {
high { "Definitely bring an umbrella! ☔" }
medium { "Maybe pack a raincoat 🧥" }
low { "Enjoy the sunshine! ☀️" }
}
We welcome contributions! See our Contributing Guide for details.
- Language features and operators
- Confidence extraction strategies
- LLM provider integrations
- Documentation and examples
- Testing and benchmarks
- Lumina - confidence-native model architecture (R&D)
MIT - See LICENSE for details.
Built with ❤️ for the uncertain future of programming

Report Bug • Request Feature • Join Discussion