OpenClaw-powered desktop AI companion with Live2D avatars, voice input, and speech synthesis.
Work in progress and under active development. Contributions and feedback are welcome.
Built with Electron + React + TypeScript and the Charivo framework.
- Electron - Desktop app shell
- React + TypeScript - Renderer UI
- electron-vite - Build tooling
- Live2D (@charivo/render-live2d, @charivo/render-core) - Character model rendering and motion playback
- Charivo (@charivo/core, @charivo/llm-core, @charivo/tts-core) - Character session orchestration (LLM/TTS/Renderer)
- OpenClaw (@charivo/llm-provider-openclaw) - Local LLM backend for chat
- OpenAI TTS (@charivo/tts-player-openai) - Direct renderer-side speech synthesis for local use
The chat path:

```
[Renderer - React]
  useCharivo + Live2DPanel/useLive2DRenderer
      |
      | Charivo events
      v
[Renderer - Live2D]
  @charivo/render-live2d
      |
      | IPC (window.api.chat)
      v
[Main Process - Node.js]
  @charivo/llm-provider-openclaw
      |
      | HTTP (OpenAI-compatible)
      v
[OpenClaw - localhost:18789]
```

The TTS path:

```
[Renderer - React]
  @charivo/tts-player-openai
      |
      | HTTPS
      v
[OpenAI Audio API]
```

OpenClaw API calls are handled in the Electron main process (Node.js) to avoid renderer CORS/PNA limits. TTS is intentionally called directly from the renderer for local development convenience.
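Under the hood, the main-process hop is an ordinary OpenAI-compatible HTTP request to the local OpenClaw server. A minimal sketch of how such a request could be assembled (the helper and its types are illustrative assumptions based on the OpenAI-compatible protocol, not the actual @charivo/llm-provider-openclaw API):

```typescript
// Illustrative only: shapes assumed from the OpenAI-compatible protocol,
// not taken from @charivo/llm-provider-openclaw.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

type ChatRequest = {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
};

function buildChatRequest(
  baseUrl: string, // e.g. read from OPENCLAW_BASE_URL in the main process
  token: string,   // e.g. read from OPENCLAW_TOKEN in the main process
  messages: ChatMessage[],
): ChatRequest {
  return {
    url: `${baseUrl}/chat/completions`, // standard OpenAI-compatible endpoint
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${token}`,
      },
      body: JSON.stringify({ messages }),
    },
  };
}
```

Issuing this request from the main process and passing only the resulting text back over IPC is what keeps the renderer free of CORS/PNA concerns.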
Live2D is already integrated through Charivo renderer attachment.
- Live2D renderer hook: `src/renderer/src/hooks/useLive2DRenderer.ts`
- Live2D panel component: `src/renderer/src/components/Live2DPanel.tsx`
- Model path config: `src/renderer/src/config/live2d.ts`
`charivo.attachRenderer(manager)` and `charivo.setCharacter(APP_CHARACTER)` are already wired in the renderer lifecycle.
- OpenClaw installed and running (default: http://127.0.0.1:18789)
- OpenAI API key for TTS
- Node.js and npm
Set OpenClaw connection values in .env:
```
OPENCLAW_TOKEN=your_openclaw_token
OPENCLAW_BASE_URL=http://127.0.0.1:18789/v1
```

`OPENCLAW_BASE_URL` defaults to `http://127.0.0.1:18789/v1` and can be omitted.
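The default-fallback behavior can be sketched as a small helper; the function name is hypothetical, and the real provider may resolve the URL differently:

```typescript
// Hypothetical helper: resolve the OpenClaw base URL from environment
// variables, falling back to the documented default when unset or empty.
function resolveOpenClawBaseUrl(env: Record<string, string | undefined>): string {
  return env.OPENCLAW_BASE_URL || "http://127.0.0.1:18789/v1";
}
```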
Create `.env` in the project root:

```
VITE_OPENAI_API_KEY=your_openai_api_key
VITE_OPENAI_TTS_MODEL=gpt-4o-mini-tts
VITE_OPENAI_TTS_VOICE=marin
```

Supported models: `tts-1`, `tts-1-hd`, `gpt-4o-mini-tts`
Supported voices: `alloy`, `echo`, `fable`, `marin`, `onyx`, `nova`, `shimmer`
If VITE_OPENAI_API_KEY is not set, TTS is disabled and chat still works.
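That gating boils down to a small predicate over the Vite env value; the function name is illustrative:

```typescript
// TTS is enabled only when a non-empty API key is configured; chat works
// regardless. In the renderer the key would come from
// import.meta.env.VITE_OPENAI_API_KEY.
function isTtsEnabled(apiKey: string | undefined): boolean {
  return typeof apiKey === "string" && apiKey.trim().length > 0;
}
```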
Character profile can be changed in `src/renderer/src/config/character.ts`.
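For orientation, a hypothetical shape for that file; the real fields come from Charivo's character type, so treat this purely as a sketch:

```typescript
// Hypothetical contents of src/renderer/src/config/character.ts.
// Field names are assumptions, not the real @charivo/core schema.
const APP_CHARACTER = {
  name: "Companion",
  persona: "A friendly desktop AI companion.",
};
```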
Direct renderer-side OpenAI usage exposes the API key to the local client runtime. Use this setup only for trusted local/dev environments.
- OpenClaw LLM integration via IPC
- Chat UI (message history, error handling)
- Live2D rendering integration (@charivo/render-live2d)
- Direct OpenAI TTS integration (@charivo/tts-player-openai)
- Speech-to-text (@charivo/stt-core)
- WebSocket support for real-time streaming responses
```bash
$ npm install
$ npm run dev
$ npm run typecheck
```

```bash
# For Windows
$ npm run build:win

# For macOS
$ npm run build:mac

# For Linux
$ npm run build:linux
```