
Evaluate LiteLLM: integration prototype, limitations, alternatives #4

@Aureliolo

Description

Context

LiteLLM is the primary candidate for provider abstraction (spec 9.3). Before committing to it, we need a thorough evaluation to understand its capabilities, limitations, and whether it meets our requirements across all target providers.

Acceptance Criteria

  • Evaluate support for all target providers: Anthropic, OpenRouter, Ollama
  • Verify cost tracking reliability across providers
  • Test retry and fallback behavior
  • Test tool calling consistency across providers
  • Measure overhead: latency impact, memory usage
  • Verify streaming support quality
  • Test handling of provider outages and error propagation
  • Evaluate alternatives: direct SDKs, aisuite, custom abstraction
  • Working prototype with 2+ providers demonstrating core flows
  • Cost tracking verified against known pricing
  • Retry mechanism tested with simulated failures
  • Tool calling tested across at least 2 providers
  • Decision documented with rationale
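For the "cost tracking verified against known pricing" criterion, one way to check a provider-reported cost is to recompute it from the published per-million-token prices and the token counts in the response usage block. A minimal sketch, with placeholder prices and token counts (the actual figures would come from each provider's pricing page and LiteLLM's reported cost):

```python
def expected_cost(prompt_tokens: int, completion_tokens: int,
                  input_price_per_mtok: float,
                  output_price_per_mtok: float) -> float:
    """Expected USD cost from published per-million-token prices."""
    return (prompt_tokens * input_price_per_mtok
            + completion_tokens * output_price_per_mtok) / 1_000_000


def cost_matches(reported: float, expected: float,
                 rel_tol: float = 0.01) -> bool:
    """Accept the reported cost if it is within 1% of the expected value."""
    return abs(reported - expected) <= rel_tol * max(expected, 1e-12)


# Placeholder pricing ($3/M input, $15/M output) and token counts:
exp = expected_cost(1200, 350, 3.0, 15.0)
print(exp)  # → 0.00885
```

In the prototype, `reported` would be whatever LiteLLM's cost-tracking hook returns for the same call, so any drift between LiteLLM's internal price table and the provider's current pricing shows up as a tolerance failure.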

Dependencies

Design Spec Reference

Section 9.3 — LiteLLM Integration
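For the retry/fallback tests, a LiteLLM Router configuration along the following lines could exercise all three target providers. Model names, retry counts, and the fallback chain below are illustrative placeholders, not a committed configuration; the exact schema should be checked against the LiteLLM docs during the evaluation:

```python
# Sketch of a Router-style config covering the three target providers.
# Model identifiers are placeholders chosen for illustration.
router_config = {
    "model_list": [
        {"model_name": "primary",
         "litellm_params": {"model": "anthropic/claude-3-5-sonnet-20240620"}},
        {"model_name": "backup",
         "litellm_params": {"model": "openrouter/meta-llama/llama-3.1-70b-instruct"}},
        {"model_name": "local",
         "litellm_params": {"model": "ollama/llama3.1"}},
    ],
    # On repeated failures of "primary", fall back to "backup", then "local".
    "fallbacks": [{"primary": ["backup", "local"]}],
    "num_retries": 2,
}

# Confirm the config spans all target providers from the acceptance criteria:
providers = {m["litellm_params"]["model"].split("/")[0]
             for m in router_config["model_list"]}
print(sorted(providers))  # → ['anthropic', 'ollama', 'openrouter']
```

The simulated-failure test would then point "primary" at an unreachable endpoint (or inject errors) and assert that requests land on "backup" within `num_retries` attempts.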

Metadata


Labels

  • prio:critical - Blocks other work, must do first
  • scope:medium - 1-3 days of work
  • spec:providers - DESIGN_SPEC Section 9 - Model Provider Layer
  • type:research - Evaluate options, make tech decisions
