DreamLens

AI-powered accessibility app generator for Raven Glass AR glasses.

DreamLens builds personalized accessibility applications through a two-stage process: user onboarding collects preferences, then an AI pipeline generates custom apps with features and UI components tailored to individual needs.

Quick Start

See QUICK_START.md for installation and usage instructions.

Architecture

DreamLens consists of four core components:

┌─────────────────────────────────────────────────────────────┐
│ STAGE 1: Onboarding                                         │
│ Collects user profile through interactive questions         │
│ Location: onboarding/                                       │
└─────────────────────────────────────────────────────────────┘
                           ↓
                    UserProfile JSON
                           ↓
┌─────────────────────────────────────────────────────────────┐
│ STAGE 2: Orchestration Pipeline                             │
│ AI pipeline: Classifier → UI Selector → Generator           │
│ Location: orchestration/                                    │
└─────────────────────────────────────────────────────────────┘
                           ↓
                  Generated Python App
                           ↓
┌─────────────────────────────────────────────────────────────┐
│ LIBRARIES                                                   │
│ • features/ - 7 modules, 8 accessibility features           │
│ • component_library/ - 24 UI components in 6 categories     │
└─────────────────────────────────────────────────────────────┘

Stage 1: Onboarding Flow

User interaction flow to collect accessibility needs:

Welcome → Gaze Acknowledgment → Voice Acknowledgment
            ↓
Question Screens (3 questions)
            ↓
Profile Creation
            ↓
UserProfile JSON (output)

Output includes sensory preferences, stress triggers, cognitive needs, and environmental requirements.
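
A profile might look like the following sketch. The five top-level keys match the UserProfile fields named in the Data Flow section; the nested values here are illustrative only, not the real schema:

```python
# Illustrative UserProfile payload; top-level keys come from the pipeline's
# UserProfile, inner values are made-up examples.
user_profile = {
    "sensory_profile": {"light_sensitivity": "high", "sound_sensitivity": "medium"},
    "stress_profile": {"triggers": ["crowds", "sudden noise"]},
    "cognitive_profile": {"prefers_simple_ui": True},
    "environmental_needs": {"quiet_spaces": True},
    "coping_strategies": ["breathing exercises", "calming audio"],
}
```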

See onboarding/README.md

Stage 2: Orchestration Pipeline (v2.0 - Optimized)

Five-step LangGraph pipeline that generates personalized apps:

1. Classifier (Grok)      → Analyzes pain point, selects features, documents data_produced
2. UI Selector (Grok)     → Matches components to feature data, outputs code_snippets
3. Pre-Validation         → Catches errors BEFORE expensive generation
4. Generator (Opus 4.5)   → Skeleton + chain-of-thought code generation
5. Validator              → Syntax, imports, execution check (5 retries)
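
Conceptually, the five steps thread one state object through the pipeline. A minimal plain-Python sketch (the real implementation is a LangGraph graph with retry logic; the step functions below stand in for the actual Grok and Opus 4.5 calls):

```python
# Sketch of the five-step flow: each step reads and extends a shared state
# dict, and pre-validation fails fast before any expensive generation call.
def run_pipeline(profile, classify, select_ui, pre_validate, generate, validate):
    state = {"profile": profile}
    state["features"] = classify(state)      # 1. Classifier (Grok)
    state["components"] = select_ui(state)   # 2. UI Selector (Grok)
    pre_validate(state)                      # 3. Pre-Validation (cheap, no LLM)
    state["code"] = generate(state)          # 4. Generator (Opus 4.5)
    validate(state)                          # 5. Validator (syntax/imports/exec)
    return state["code"]
```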

Key Optimizations:

  • Data-driven architecture: Features produce data → UI components consume data
  • Pre-validation catches errors before expensive LLM calls
  • Chain-of-thought prompting with skeleton templates
  • Multi-model strategy: Grok (fast reasoning) + Opus 4.5 (quality code) + Haiku (quick fixes)
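
The data-driven contract is what makes pre-validation cheap: every selected component's inputs must be covered by some feature's data_produced. A hedged sketch of that check (data_produced appears in the pipeline description above; the data_consumed field name is an assumption):

```python
def pre_validate(features, components):
    """Return the components whose data inputs no selected feature produces,
    so mismatches are caught before any expensive generation call (sketch)."""
    produced = {d for f in features for d in f["data_produced"]}
    return [c["name"] for c in components
            if not set(c["data_consumed"]) <= produced]
```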

Output is a complete Python application ready to run on Raven Glass.

See orchestration/README.md

Project Structure

DreamLens/
├── onboarding/              # Stage 1: User profile creation
│   ├── step-2-components/   # Visual components
│   ├── step-3-animations/   # Animation system
│   ├── step-4-screens/      # Individual screens
│   ├── step-5-orchestrator/ # Flow orchestration
│   └── step-6-testing/      # Tests and simulators
│
├── orchestration/           # Stage 2: AI pipeline
│   ├── step-1-foundation/   # Pipeline architecture
│   ├── step-2-classifier/   # Feature classification
│   ├── step-3-ui-selector/  # UI selection
│   ├── step-4-generator/    # Code generation
│   └── integration/         # Integrated pipeline
│
├── features/                # Feature library
│   ├── modules/             # 7 hardware modules
│   └── instructions/        # Feature documentation
│
├── component_library/       # UI component library
│   ├── components/          # 24 UI components in 6 categories
│   ├── base/                # Base classes
│   └── styles/              # 4 style presets
│
├── shared/                  # Shared utilities
│   ├── schemas/             # Data schemas
│   ├── validators/          # Validation logic
│   └── templates/           # Code templates
│
├── raven-framework/         # Raven SDK reference
│
└── .archive/                # Deprecated content

Features Library

7 hardware modules providing accessibility capabilities:

  • EyeTrackerModule - Gaze tracking and detection
  • CameraModule - Visual scene capture
  • DisplayModule - UI rendering and animation
  • SpeakerModule - Audio playback and generation
  • MicrophoneModule - Audio input capture
  • IMUModule - Motion and orientation sensing
  • AIHelperModule - LLM integration and multimodal processing

8 accessibility features built on these modules:

  • Meditation Guide
  • Scene Understanding
  • Calming Audio
  • Text-to-Speech
  • Visual Dimming
  • Panic Intervention
  • Voice Assistant
  • Audio Transcription
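
Each feature is a thin composition over the hardware modules; Text-to-Speech, for instance, would pair AIHelperModule with SpeakerModule. The class shape and method names below are stand-ins for illustration, not the real module API:

```python
class TextToSpeech:
    """Sketch of a feature composing two hardware modules.

    `synthesize` and `play` are assumed method names, not the actual
    interfaces of AIHelperModule / SpeakerModule.
    """

    def __init__(self, ai_helper, speaker):
        self.ai_helper = ai_helper  # e.g. AIHelperModule
        self.speaker = speaker      # e.g. SpeakerModule

    def speak(self, text):
        audio = self.ai_helper.synthesize(text)  # text -> audio bytes
        self.speaker.play(audio)                 # route audio to hardware
```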

See features/README.md

Component Library

24 UI components across 6 categories:

  • Displays (4): InfoCard, LiveCaptions, NotificationBubble, TooltipOverlay
  • Indicators (5): ProgressRing, StatusPill, DataBar, HeatmapDot, AlertBadge
  • Meters (3): AudioLevelMeter, SensoryGauge, PulseWave
  • Interactive (4): AdaptiveBlending, DwellButton, RadialMenu, CircularDial
  • Guides (3): BreathingCircle, FocusDot, GuidingSprite
  • Companions (3): SlimeCompanion, CalmingOrb, BlobFriend

All components support 4 style presets: glassmorphism, soft, minimal, neon.
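
The preset acts as a single styling knob shared by all 24 components. A minimal sketch of the idea, with invented preset values (the four preset names come from the list above):

```python
# Invented example values; only the four preset names are from the library.
STYLE_PRESETS = {
    "glassmorphism": {"opacity": 0.6, "blur": 12},
    "soft":          {"opacity": 0.9, "blur": 4},
    "minimal":       {"opacity": 1.0, "blur": 0},
    "neon":          {"opacity": 1.0, "blur": 0, "glow": True},
}

def apply_style(component_config, preset="glassmorphism"):
    # Merge a preset into a component's config; explicit component keys win.
    return {**STYLE_PRESETS[preset], **component_config}
```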

See component_library/README.md

Data Flow

Complete flow from user input to deployed app:

User Input
    ↓
Onboarding Flow (onboarding/step-5-orchestrator/onboarding_flow_v2.py)
    ↓
UserProfile {
    sensory_profile, stress_profile, cognitive_profile,
    environmental_needs, coping_strategies
}
    ↓
Orchestration Pipeline (orchestration/integration/integrated_pipeline.py)
    ├─ Step 1: Foundation (defines pipeline state)
    ├─ Step 2: Classifier (Grok - selects features with data_produced)
    ├─ Step 3: UI Selector (Grok - selects components with code_snippets)
    ├─ Step 4: Pre-Validation (catches errors before generation)
    └─ Step 5: Generator (Opus 4.5 - skeleton + chain-of-thought)
    ↓
Validation Loop (up to 5 retries: Opus → Opus → Haiku → Haiku → Opus)
    ↓
Generated Agent Code (orchestration/output/generated_agent.py)
    ├─ Imports from features/modules/
    └─ Imports from component_library/
    ↓
Run on Raven Glass
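
The validation loop escalates through the fixed five-model sequence shown above. A sketch of that loop, where validate and fix stand in for the real syntax/import/execution checks and the LLM repair calls:

```python
def validation_loop(code, validate, fix,
                    models=("opus", "opus", "haiku", "haiku", "opus")):
    """Retry validation up to 5 times, escalating Opus -> Haiku -> Opus."""
    for model in models:
        errors = validate(code)          # syntax, imports, execution check
        if not errors:
            return code
        code = fix(code, errors, model)  # LLM repair pass with this model
    raise RuntimeError("code failed validation after 5 retries")
```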

Development

Prerequisites

  • Python 3.10+
  • PySide6 (Qt GUI framework)
  • OpenRouter or OpenAI API key

Installation

# Clone repository
git clone https://github.com/your-org/DreamLens.git
cd DreamLens

# Initialize submodules (for raven-framework)
git submodule update --init --recursive

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Install raven-framework
pip install -e raven-framework/

# Configure environment
cp .env.example .env
# Edit .env and add your API keys

Running the Application

# Full pipeline (onboarding + generation + run)
python workflow_manager.py

# Onboarding only
python -m onboarding.flow

# Orchestration pipeline only
python -m orchestration.integration.integrated_pipeline

Running Tests

pytest tests/ -v
pytest orchestration/tests/ -v

Documentation

  • QUICK_START.md - Installation and usage
  • onboarding/README.md - Stage 1: onboarding flow
  • orchestration/README.md - Stage 2: orchestration pipeline
  • features/README.md - Feature library
  • component_library/README.md - UI component library

License

MIT

About

Empowering the future of human interaction and accessibility.
