# Esperanto Documentation

Welcome to the Esperanto documentation! Esperanto provides a unified interface for interacting with multiple AI model providers through a consistent API.

## 🚀 Quick Links

## 📚 Documentation Structure

### Capabilities (What Can I Do?)

Learn about each AI capability and how to use it:

### Providers (How Do I Set This Up?)

Complete setup guides for each provider:

**Cloud API Providers:**

**Enterprise Providers:**

**Local/Self-Hosted:**

📊 Provider Comparison Matrix - Compare all providers

### Advanced Topics

Deep dives into specialized features:

## 🎯 Find What You Need

### By Goal

"I want to generate text" → Language Models Guide, Choose Provider

"I want semantic search" → Embeddings Guide, Reranking Guide

"I want voice capabilities" → Speech-to-Text Guide or Text-to-Speech Guide

"I need privacy/local deployment" → Ollama, Transformers, or OpenAI-Compatible

"I need enterprise features" → Azure OpenAI or Vertex AI

"I want the best quality" → OpenAI (GPT-4), Anthropic (Claude), ElevenLabs (voice)

"I want the fastest responses" → Groq

"I want cost optimization" → DeepSeek, Ollama, or OpenRouter

### By Provider

"How do I set up [Provider]?" → Check Providers directory for provider-specific guides

"Which provider should I use?" → Provider Comparison Matrix

"What models are available?" → See individual provider pages or use Model Discovery

### By Feature

"How do I use task-aware embeddings?" → Task-Aware Embeddings

"How do I integrate with LangChain?" → LangChain Integration

"How do I configure timeouts?" → Timeout Configuration

"How do I use local models?" → Transformers Advanced Features

## 💡 Common Patterns

### Single Provider Setup

Use one provider for all capabilities:

```python
from esperanto.factory import AIFactory

# OpenAI for everything
llm = AIFactory.create_language("openai", "gpt-4")
embedder = AIFactory.create_embedding("openai", "text-embedding-3-small")
transcriber = AIFactory.create_speech_to_text("openai", "whisper-1")
speaker = AIFactory.create_text_to_speech("openai", "tts-1")
```

### Multi-Provider Setup

Choose best-in-class for each capability:

```python
from esperanto.factory import AIFactory

# Best reasoning
llm = AIFactory.create_language("anthropic", "claude-3-5-sonnet-20241022")

# Best embeddings with advanced features
embedder = AIFactory.create_embedding("jina", "jina-embeddings-v3")

# Best voice quality
speaker = AIFactory.create_text_to_speech("elevenlabs", "eleven_multilingual_v2")
```

### Local/Cloud Hybrid

Privacy-sensitive data locally, specialized tasks in cloud:

```python
from esperanto.factory import AIFactory

# Local for privacy
local_llm = AIFactory.create_language("ollama", "llama3.2")
local_embedder = AIFactory.create_embedding("transformers", "BAAI/bge-large-en-v1.5")

# Cloud for specialized needs
cloud_llm = AIFactory.create_language("anthropic", "claude-3-5-sonnet-20241022")
```
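The hybrid pattern usually needs a routing rule deciding which model handles a given request. A minimal sketch of such a rule is below; the `pick_model` helper and the `is_sensitive` flag are illustrative assumptions, not part of Esperanto's API, and the provider/model pairs are just the ones from the example above:

```python
# Sketch: route requests to a local or a cloud model based on data sensitivity.
# The resulting (provider, model) pair would be passed to AIFactory.create_language.
LOCAL = ("ollama", "llama3.2")
CLOUD = ("anthropic", "claude-3-5-sonnet-20241022")

def pick_model(is_sensitive: bool) -> tuple[str, str]:
    """Return a (provider, model) pair: local for private data, cloud otherwise."""
    return LOCAL if is_sensitive else CLOUD

provider, model = pick_model(is_sensitive=True)
print(provider, model)
```

In practice the sensitivity decision might come from a document classifier or a per-tenant policy; the point is that both models share the same Esperanto interface, so the caller's code does not change.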

## 📖 Documentation Conventions

### Code Examples

All examples use the Factory pattern (recommended):

```python
from esperanto.factory import AIFactory

model = AIFactory.create_language("provider", "model-name")
```

Direct instantiation is also supported (see individual capability guides).

### Environment Variables

Configure providers via environment variables (see Configuration Guide):

```bash
# Copy example file
cp .env.example .env

# Edit with your API keys
nano .env
```

See .env.example in project root for all available variables.
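Provider keys typically follow a `<PROVIDER>_API_KEY` naming convention. The lines below are an illustrative sketch of what a `.env` file might contain; the exact variable names are defined in `.env.example`, and the key values are placeholders:

```shell
# Illustrative .env contents — one key per provider you plan to use.
OPENAI_API_KEY=your-openai-key-here
ANTHROPIC_API_KEY=your-anthropic-key-here
```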

### Async Support

All methods have async equivalents with an `a` prefix:

```python
# Sync
response = model.chat_complete(messages)

# Async
response = await model.achat_complete(messages)
```

## 🔄 Migration from Old Docs

The documentation has been restructured for better navigation:

**Old Structure:**

- `docs/llm.md` (all LLM providers mixed)
- `docs/embedding/*` (embedding-specific)
- `docs/speech_to_text.md` (all STT providers)
- `docs/text_to_speech.md` (all TTS providers)

**New Structure:**

- `docs/capabilities/*` (what each capability does)
- `docs/providers/*` (how to set up each provider)
- `docs/advanced/*` (specialized topics)

**Key Changes:**

- Provider-specific info now lives in dedicated provider pages
- Capability guides focus on API and usage patterns
- Environment variables are documented per provider
- Comparison matrices make provider selection easy

## 🆘 Getting Help

### Documentation Issues

- Provider setup unclear? → Check the provider page
- Capability usage unclear? → Check the capability guide
- Feature not working? → Check the provider page's troubleshooting section
- Need examples? → Every guide includes multiple examples

### Common Questions

Q: Which provider should I use? → See Provider Comparison Matrix

Q: How do I get API keys? → See Prerequisites section in each provider page

Q: Can I use multiple providers? → Yes! See Multi-Provider Setup above

Q: Do I need to pay for everything? → No! See Ollama, Transformers for free options

Q: How do I configure timeouts? → See Timeout Configuration

Q: What about LangChain integration? → See LangChain Integration

### External Resources

## 🤝 Contributing

See CONTRIBUTING.md for guidelines on contributing to documentation or code.

## 📄 License

MIT License - See LICENSE for details.


**Ready to get started?** → Quick Start Guide