An intelligent lecture visualization tool that captures live video and audio, interprets content using AI, and creates dynamic mind maps to enhance learning - especially beneficial for those with attention deficits.
- Live Video/Audio Capture: Real-time lecture capture using LiveKit
- AI-Powered Interpretation: Automatic content analysis using OpenAI GPT
- Interactive Mind Maps: Dynamic visual aids generated with ReactFlow
- Progressive Learning: Concepts build up naturally as the lecture progresses
- Attention-Friendly: Designed to help learners stay engaged and organized
The application consists of:

- **Frontend (Next.js + React)**
  - LiveKit integration for media capture
  - ReactFlow for mind map visualization
  - Real-time concept rendering

- **API Layer**
  - LiveKit token generation (`/api/livekit/token`)
  - Transcript processing (`/api/process-transcript`)
  - OpenAI integration for concept extraction

- **Key Components**
  - `LiveKitCapture`: Handles video/audio streaming
  - `MindMapVisualization`: Renders interactive concept maps
  - Lecture page: Main interface for live sessions
  - Demo page: Interactive demonstration
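The data flow between these components can be sketched as a small transform from extracted concepts to ReactFlow-style nodes and edges. The type and function names below are hypothetical (the real definitions live in `src/types/index.ts` and the components), and the layout logic is intentionally naive:

```typescript
// Hypothetical shape of an extracted concept; the project's actual
// definitions in src/types/index.ts may differ.
interface Concept {
  id: string;
  label: string;
  parentId?: string; // root concepts have no parent
}

// Convert a flat list of concepts into ReactFlow-style nodes and edges.
function toGraph(concepts: Concept[]) {
  const nodes = concepts.map((c, i) => ({
    id: c.id,
    data: { label: c.label },
    position: { x: 0, y: i * 80 }, // naive vertical layout for illustration
  }));
  const edges = concepts
    .filter((c) => c.parentId)
    .map((c) => ({
      id: `${c.parentId}-${c.id}`,
      source: c.parentId!,
      target: c.id,
    }));
  return { nodes, edges };
}
```

In practice a layout algorithm (or ReactFlow's built-in helpers) would position the nodes; the point is only that concepts with a `parentId` become edges in the map.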
- Node.js 18+
- LiveKit account (livekit.io)
- OpenAI API key (platform.openai.com)
- **Clone the repository**

  ```bash
  git clone https://github.com/a1exung/smart-sketch.git
  cd smart-sketch
  ```

- **Install dependencies**

  ```bash
  npm install
  ```

- **Configure environment variables**

  Copy the example environment file:

  ```bash
  cp .env.example .env.local
  ```

  Edit `.env.local` with your credentials:

  ```env
  # LiveKit Configuration
  LIVEKIT_URL=wss://your-livekit-server.livekit.cloud
  LIVEKIT_API_KEY=your-api-key
  LIVEKIT_API_SECRET=your-api-secret

  # OpenAI Configuration
  OPENAI_API_KEY=your-openai-api-key

  # Next.js Public Configuration
  NEXT_PUBLIC_LIVEKIT_URL=wss://your-livekit-server.livekit.cloud
  ```

- **Run the development server**

  ```bash
  npm run dev
  ```

- **Open your browser**

  Navigate to http://localhost:3000
- Navigate to the home page
- Click "Start Lecture Session"
- Allow camera and microphone permissions
- Click "Start Session" to begin capturing
- Watch as concepts are extracted and visualized in real-time
- Click "View Demo" from the home page
- Use "Next Concept" to see how concepts progressively build
- Explore the interactive mind map features
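The progressive build behavior above can be approximated with a small merge step that only admits concepts not already on the map. This is a sketch under the assumption that concepts carry stable ids; the function name is hypothetical:

```typescript
interface ConceptItem {
  id: string;
  label: string;
}

// Merge newly extracted concepts into the existing map, skipping
// duplicates so the graph only ever grows as the lecture progresses.
function mergeConcepts(
  existing: ConceptItem[],
  incoming: ConceptItem[]
): ConceptItem[] {
  const seen = new Set(existing.map((c) => c.id));
  return [...existing, ...incoming.filter((c) => !seen.has(c.id))];
}
```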
```
smart-sketch/
├── src/
│   ├── app/
│   │   ├── api/                    # API routes
│   │   │   ├── livekit/token/      # LiveKit token generation
│   │   │   └── process-transcript/ # AI transcript processing
│   │   ├── lecture/                # Lecture session page
│   │   ├── demo/                   # Demo page
│   │   ├── layout.tsx              # Root layout
│   │   ├── page.tsx                # Home page
│   │   └── globals.css             # Global styles
│   ├── components/
│   │   ├── LiveKitCapture.tsx      # Video/audio capture component
│   │   └── MindMapVisualization.tsx # Mind map rendering
│   ├── lib/
│   │   └── utils.ts                # Utility functions
│   └── types/
│       └── index.ts                # TypeScript definitions
├── public/                         # Static assets
├── package.json
├── tsconfig.json
├── tailwind.config.ts
└── next.config.js
```
- `npm run dev` - Start development server
- `npm run build` - Build for production
- `npm start` - Start production server
- `npm run lint` - Run ESLint
- `npm run type-check` - Run TypeScript type checking
Edit `src/app/api/process-transcript/route.ts` to customize how AI extracts concepts from transcripts.
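Whatever prompt you use there, the model's reply still needs defensive parsing before it reaches the mind map. A minimal sketch, assuming the route asks the model for a JSON array of concept strings (the function name and the actual route's parsing are hypothetical):

```typescript
// Extract a JSON array of concept strings from a free-form model reply,
// tolerating surrounding prose and malformed output.
function parseConcepts(reply: string): string[] {
  const start = reply.indexOf("[");
  const end = reply.lastIndexOf("]");
  if (start === -1 || end === -1 || end < start) return [];
  try {
    const parsed = JSON.parse(reply.slice(start, end + 1));
    return Array.isArray(parsed)
      ? parsed.filter((c): c is string => typeof c === "string")
      : [];
  } catch {
    return []; // malformed JSON: skip this batch rather than crash
  }
}
```

Returning an empty array on failure lets the session continue and pick up concepts from the next transcript batch instead of breaking the live view.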
Customize node styles in `src/components/MindMapVisualization.tsx` and ReactFlow styles in `src/app/globals.css`.
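ReactFlow nodes accept an inline `style` object alongside stylesheet overrides, so per-node styling can be done in the component. A hypothetical example (all values illustrative, not the component's actual styling):

```typescript
// Illustrative inline styling for a concept node; ReactFlow passes the
// `style` object through to the rendered node element.
const conceptNodeStyle = {
  background: "#eef2ff",
  border: "1px solid #6366f1",
  borderRadius: 8,
  padding: 10,
  fontSize: 12,
};

const node = {
  id: "concept-1",
  position: { x: 0, y: 0 },
  data: { label: "Photosynthesis" },
  style: conceptNodeStyle,
};
```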
In `src/components/LiveKitCapture.tsx`, modify the interval timing for transcript processing (currently 10 seconds).
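A sketch of what that interval loop could look like, showing where the 10-second value lives; the names are illustrative, not the component's actual code. Raising the interval trades map freshness for fewer OpenAI calls:

```typescript
// Interval (ms) between transcript-processing calls; 10 s is the
// current default mentioned above.
const TRANSCRIPT_INTERVAL_MS = 10_000;

// Start flushing the transcript buffer on a fixed interval.
// Returns a cleanup function suitable for a React useEffect teardown.
function startTranscriptLoop(
  flush: () => void,
  intervalMs: number = TRANSCRIPT_INTERVAL_MS
): () => void {
  const handle = setInterval(flush, intervalMs);
  return () => clearInterval(handle);
}
```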
Contributions are welcome! Please feel free to submit a Pull Request.
This project is open source and available under the MIT License.
- LiveKit - Real-time video/audio infrastructure
- ReactFlow - Interactive node-based graphs
- OpenAI - AI-powered content interpretation
- Next.js - React framework
For issues and questions, please open an issue on GitHub.
- Speech-to-text integration for better transcription
- Multiple visualization layouts (tree, radial, hierarchical)
- Session recording and playback
- Collaborative learning features
- Export mind maps as images/PDFs
- Integration with note-taking apps
- Mobile app support