🌟 Prism: Programming with Confidence in an Uncertain World

Prism logo

npm version TypeScript License: MIT

A programming language where uncertainty is a first-class citizen.

Documentation • Quick Start • Packages • Examples • Contributing


📦 Important: Package Migration

If you're using prism-uncertainty, please migrate to @prism-lang/core

npm uninstall prism-uncertainty
npm install @prism-lang/core

🚀 Quick Start

Install using your preferred package manager:

# npm
npm install @prism-lang/core
npm install @prism-lang/confidence  # optional

# yarn
yarn add @prism-lang/core
yarn add @prism-lang/confidence     # optional

# pnpm
pnpm add @prism-lang/core
pnpm add @prism-lang/confidence     # optional

# Install CLI globally
npm install -g @prism-lang/cli      # or yarn/pnpm

🎨 VS Code Extension

Get syntax highlighting and language support for VS Code:

# Download and install the extension
curl -L https://github.com/HaruHunab1320/Prism-TS/releases/download/v0.1.0/prism-lang-0.1.0.vsix -o prism-lang.vsix
code --install-extension prism-lang.vsix
rm prism-lang.vsix

Features:

  • ✨ Full syntax highlighting for all Prism features
  • 🎨 Semantic colors for confidence operators
  • 🌈 Light and dark themes optimized for Prism
  • 📝 Auto-indentation and bracket matching

Your First Prism Program

Using the CLI (Recommended)

Create a file hello.prism:

// hello.prism
const name = "World"
const greeting = llm("Create a friendly greeting for ${name}")

console.log(greeting)

// Make decisions based on confidence
let response = llm("Should we proceed?") ~> 0.75
uncertain if (response) {
  high { console.log("✅ Proceeding with confidence!") }
  medium { console.log("⚠️ Proceeding with caution...") }
  low { console.log("❌ Too uncertain, aborting.") }
}

Run it:

# Execute a Prism file
prism run hello.prism
prism run --watch hello.prism  # hot reload while editing

# Stream a one-off LLM prompt (Ctrl+C to cancel)
prism llm --provider claude --model claude-3-haiku --temperature 0.2 --stream "Summarize today's status update"

# Or use the REPL for interactive development
prism

# Evaluate expressions directly
prism eval "2 + 2 ~> 0.99"

# Advanced CLI flags:
#   --model <id>               Override the provider model
#   --timeout <ms>             Abort long-running prompts
#   --include-reasoning        Request reasoning metadata when the provider supports it
#   --no-structured-output     Force plain text responses (required for streaming)

Inside the REPL, use :stream <prompt> to watch tokens arrive in real time (press Ctrl+C to cancel).

Using as a TypeScript Library

import { parse, createRuntime } from '@prism-lang/core';

const code = `
  // AI responses with confidence
  const analysis = llm("Is this secure?") ~> 0.85
  
  // Confidence-aware decisions
  uncertain if (analysis) {
    high { deploy() }
    medium { review() }
    low { abort() }
  }
`;

const ast = parse(code);
const runtime = createRuntime();
const result = await runtime.execute(ast);

📚 Packages

Prism is organized as a monorepo with focused, modular packages:

| Package | Description |
| --- | --- |
| @prism-lang/core | Core language implementation (parser, runtime, types) |
| @prism-lang/confidence | Confidence extraction from LLMs and other sources |
| @prism-lang/llm | LLM provider integrations (Claude, Gemini, OpenAI) |
| @prism-lang/cli | Command-line interface |
| @prism-lang/repl | Interactive REPL |

✨ Why Prism?

Every AI application deals with uncertainty, but traditional languages pretend it doesn't exist. Prism makes uncertainty explicit and manageable.

🎯 Uncertainty as a First-Class Citizen

// Traditional approach: Uncertainty is hidden
let result = llm_call()
if (result) { /* hope for the best */ }

// Prism: Uncertainty is explicit
let result = llm_call() ~> 0.7
uncertain if (result) {
  high { proceed_with_confidence() }
  medium { add_human_review() }
  low { need_more_data() }
}

🧠 Built for the AI Era

// Ensemble multiple models with confidence
const claude_says = llm("Analyze risk", { provider: "claude" }) ~> 0.9
const gpt_says = llm("Analyze risk", { provider: "gpt4" }) ~> 0.8
const gemini_says = llm("Analyze risk", { provider: "gemini" }) ~> 0.7

// Automatically use highest confidence result
let best_analysis = claude_says ~||> gpt_says ~||> gemini_says

// Confidence-aware null coalescing
let decision = best_analysis ~?? fallback_analysis ~?? "manual_review"
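The ensemble and coalescing operators above can be modeled in plain TypeScript. This is an illustrative sketch, not the runtime's actual implementation: `Uncertain`, `best`, and `coalesce` are hypothetical names, and the 0.5 fall-through threshold for `~??` is an assumption.

```typescript
// Hypothetical model of a confidence-tagged value (not the real runtime type).
interface Uncertain<T> {
  value: T;
  confidence: number; // 0..1
}

// ~||> (parallel confidence): keep the operand with the highest confidence.
function best<T>(...candidates: Uncertain<T>[]): Uncertain<T> {
  return candidates.reduce((a, b) => (b.confidence > a.confidence ? b : a));
}

// ~?? (confidence null coalescing): fall through when a value is missing
// or its confidence is below a threshold (0.5 here is an assumption).
function coalesce<T>(
  primary: Uncertain<T> | null,
  fallback: Uncertain<T>,
  threshold = 0.5
): Uncertain<T> {
  return primary && primary.confidence >= threshold ? primary : fallback;
}

const claude = { value: "low risk", confidence: 0.9 };
const gpt = { value: "medium risk", confidence: 0.8 };
console.log(best(claude, gpt).value); // "low risk"
```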

βš™οΈ Configurable LLM Calls

Need a different provider, model, or temperature for a specific prompt? Pass an options object to llm():

const structured = llm("Summarize the findings", {
  provider: "claude",
  model: "claude-3-sonnet",
  temperature: 0.2,
  maxTokens: 400
})

const recalibrated = llm("Explain this reasoning chain", {
  extractor: response => response.confidence * 0.8
})

Supported fields: provider, model, temperature, maxTokens, topP, timeout, structuredOutput, includeReasoning, and confidenceExtractor (used by providers such as @prism-lang/llm). You can also pass an extractor function that overrides the returned confidence by inspecting the raw response object.

🔊 Streaming Inside Prism

Use stream_llm() to process tokens as they arrive:

let handle = stream_llm("Draft a haiku about autumn rain", { provider: "claude", structuredOutput: false })

let chunk = await handle.next()
while (chunk != null) {
  console.log(chunk.text)
  chunk = await handle.next()
}

let final = await handle.result()
console.log("Final confidence:", <~ final)

Call handle.cancel() to abort mid-stream (e.g., when a human takes over).

📊 Confidence Extraction Made Easy

With @prism-lang/confidence:

import { confidence } from '@prism-lang/confidence';

// Extract confidence from any LLM response
const response = await llm("Is this safe?");
const conf = await confidence.extract(response);

// Multiple strategies available
const ensemble = await confidence.fromConsistency(
  () => llm("Analyze this"),
  { samples: 5 }
);

// Domain-specific calibration
const calibrated = await confidence.calibrators.security
  .calibrate(conf, { type: 'sql_injection' });

Note: confidence extraction is currently heuristic for most providers (due to limited log-prob access), so treat scores as decision-support signals rather than strict probabilities.
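The idea behind consistency-based extraction can be sketched in a few lines: sample the same prompt several times and use the modal answer's frequency as the confidence. This mirrors the concept behind confidence.fromConsistency but is not its actual code; `consistencyConfidence` is a hypothetical name.

```typescript
// Illustrative sketch of consistency-based confidence: sample an answer
// several times and report the most common answer along with the fraction
// of samples that agreed with it.
async function consistencyConfidence(
  sample: () => Promise<string>,
  samples = 5
): Promise<{ answer: string; confidence: number }> {
  const counts = new Map<string, number>();
  for (let i = 0; i < samples; i++) {
    const a = await sample();
    counts.set(a, (counts.get(a) ?? 0) + 1);
  }
  let answer = "";
  let max = 0;
  counts.forEach((n, a) => {
    if (n > max) {
      answer = a;
      max = n;
    }
  });
  return { answer, confidence: max / samples };
}
```

Agreement-as-confidence is only a proxy: five identical wrong answers still score 1.0, which is one reason the note above recommends treating scores as decision-support signals.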

🔧 Language Features

Confidence Operators

  • ~> - Assign confidence
  • <~ - Extract confidence
  • ~*, ~/, ~+, ~- - Confidence-preserving arithmetic
  • ~==, ~!=, ~>, ~< - Confidence comparisons
  • ~&&, ~|| - Confidence logical operations
  • ~?? - Confidence null coalescing
  • ~||> - Parallel confidence (ensemble)
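To make the arithmetic operators concrete, here is a minimal TypeScript sketch of one plausible semantics for `~+` and `~*`: operate on the values normally and propagate the product of the operand confidences. The product rule is an assumption for illustration; the runtime may combine confidences differently (e.g. taking the minimum).

```typescript
// Hypothetical confidence-tagged number (not the runtime's type).
interface CValue {
  value: number;
  confidence: number; // 0..1
}

// Sketch of ~+ : add the values, multiply the confidences (an assumption).
function cAdd(a: CValue, b: CValue): CValue {
  return { value: a.value + b.value, confidence: a.confidence * b.confidence };
}

// Sketch of ~* : multiply the values, multiply the confidences (an assumption).
function cMul(a: CValue, b: CValue): CValue {
  return { value: a.value * b.value, confidence: a.confidence * b.confidence };
}

const x = { value: 10, confidence: 0.9 };
const y = { value: 4, confidence: 0.8 };
const sum = cAdd(x, y); // value 14, confidence ≈ 0.72
```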

Control Flow

// Uncertain conditionals
uncertain if (measurement) {
  high { /* >70% confidence */ }
  medium { /* 30-70% confidence */ }
  low { /* <30% confidence */ }
}

// Uncertain loops
uncertain while (condition) {
  high { /* >70% */ }
  medium { /* 30-70% */ }
  low { /* <30% */ }
  default { /* fallback */ }
}

// Deterministic do/while
let count = 0
do {
  count = count + 1
} while (count < 3)
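The documented thresholds (>70%, 30–70%, <30%) can be sketched as a simple branch dispatcher. The boundary handling (treating exactly 0.7 and 0.3 as medium) is an assumption, and `selectBranch` is a hypothetical name, not a runtime API.

```typescript
type Branch = "high" | "medium" | "low";

// Map a confidence score to an `uncertain if` branch using the documented
// thresholds; inclusive/exclusive handling at the boundaries is an assumption.
function selectBranch(confidence: number): Branch {
  if (confidence > 0.7) return "high";
  if (confidence >= 0.3) return "medium";
  return "low";
}
```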

Modern Language Features

  • First-class functions and lambdas
  • Async/await with confidence propagation
  • Module system with import/export
  • Confident ternary (~?) and confident assignment operators (~+=, ~-=, ~*=, ~/=)
  • Destructuring with confidence preservation
  • Rust-style match expressions with guards and patterns
  • Type checking with typeof and instanceof
  • try/catch/finally error handling

πŸ› οΈ Development

Note: We use pnpm and Turborepo for development. You'll need pnpm installed to contribute.

# Clone the repository
git clone https://github.com/HaruHunab1320/Prism-TS.git
cd Prism-TS

# Install pnpm if you don't have it
npm install -g pnpm

# Install dependencies
pnpm install

# Build all packages
pnpm build

# Run tests
pnpm test

# Start development mode
pnpm dev

📦 Publishing Packages

We use changesets to manage versioning and publishing. This ensures all packages stay in sync and peer dependencies are correctly managed.

Release Workflow

  1. Make your changes and commit them

  2. Create a changeset to describe your changes:

    pnpm changeset
    # or
    pnpm release:create

    • Select which packages changed
    • Choose the bump type (patch/minor/major)
    • Write a description for the changelog
  3. Check what will be released:

    pnpm release:check
  4. Version the packages (updates package.json files and changelogs):

    pnpm release:version

    This automatically commits the version changes.

  5. Publish to npm:

    pnpm release:publish

    This builds all packages, publishes them, and pushes git tags.

Important Notes

  • Never use pnpm publish directly - it won't handle workspace protocols correctly
  • All @prism-lang/* packages use fixed versioning - they move together
  • Changesets automatically handles peer dependency version updates
  • The workspace:* protocol is used for local development and automatically replaced during publishing

For Users vs Contributors

Users: Install our packages with any package manager (npm, yarn, pnpm)

npm install @prism-lang/core    # Works with npm, yarn, or pnpm!

Contributors: Development requires pnpm for workspace management

pnpm install              # Must use pnpm for development

Repository Structure

prism/
├── packages/
│   ├── prism-core/        # Core language implementation
│   ├── prism-confidence/  # Confidence extraction library
│   └── prism-llm/         # LLM provider integrations
├── apps/
│   ├── cli/               # Command-line interface
│   └── repl/              # Interactive REPL
├── examples/              # Example Prism programs
├── docs/                  # Documentation
├── pnpm-workspace.yaml    # pnpm workspace configuration
└── turbo.json             # Turborepo configuration

📖 Documentation

📚 Full Documentation

🌟 Examples

AI Safety Analysis

let code = read_file("user_submission.py")
let safety = llm("Analyze for vulnerabilities: " + code)

uncertain if (safety) {
  high { 
    deploy_to_production()
    log("Deployed with confidence: " + (<~ safety))
  }
  medium {
    let results = run_sandboxed_tests(code)
    if (results.pass) { deploy_to_staging() }
  }
  low {
    send_to_security_team(code, safety)
  }
}

Multi-Model Consensus

let question = "Will it rain tomorrow?"

// Get predictions from multiple sources
let weather_api = fetch_weather_api() ~> 0.8
let model1 = llm(question, { provider: "claude" }) ~> 0.9  
let model2 = llm(question, { provider: "gemini" }) ~> 0.85
let local_sensors = analyze_pressure() ~> 0.7

// Combine predictions with confidence weighting
let consensus = (weather_api ~+ model1 ~+ model2 ~+ local_sensors) ~/ 4

uncertain if (consensus) {
  high { "Definitely bring an umbrella! ☔" }
  medium { "Maybe pack a raincoat 🧥" }
  low { "Enjoy the sunshine! ☀️" }
}
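One way to read the consensus line above is as a confidence-weighted average over numeric predictions. A minimal TypeScript sketch of that reading, assuming the combined value is weighted by confidence and the combined confidence is the mean (this is an illustration, not the runtime's `~+`/`~/` semantics):

```typescript
interface Prediction {
  value: number;
  confidence: number; // 0..1
}

// Confidence-weighted consensus: weight each prediction's value by its
// confidence, and report the mean confidence as the result's confidence.
function consensus(preds: Prediction[]): Prediction {
  const totalConf = preds.reduce((s, p) => s + p.confidence, 0);
  const weighted = preds.reduce((s, p) => s + p.value * p.confidence, 0);
  return {
    value: weighted / totalConf,
    confidence: totalConf / preds.length,
  };
}
```

Weighting by confidence means a 0.9-confidence model pulls the consensus harder than a 0.7-confidence sensor, which is usually what you want from an ensemble.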

🤝 Contributing

We welcome contributions! See our Contributing Guide for details.

Key areas for contribution:

  • Language features and operators
  • Confidence extraction strategies
  • LLM provider integrations
  • Documentation and examples
  • Testing and benchmarks

R&D

  • Lumina β€” confidence-native model architecture (R&D)

📄 License

MIT - See LICENSE for details.


Built with ❤️ for the uncertain future of programming

Report Bug • Request Feature • Join Discussion
