feat: populate comparison page with 40+ competitor entries #993

@Aureliolo


Summary

Populate the comparison page data file (data/competitors.yaml) with all 40+ agent orchestration frameworks, platforms, and research projects listed in #981.

Context

Issue #981 implemented the comparison page infrastructure:

  • Shared YAML data file (data/competitors.yaml) with schema and 5 proof-of-concept entries (SynthOrg, CrewAI, AutoGen, LangGraph, ChatDev)
  • Docs page generation script (scripts/generate_comparison.py) producing docs/reference/comparison.md
  • Interactive landing page (site/src/pages/compare.astro) with React island (ComparisonTable.tsx) -- filter, sort, expand, mobile card view
  • CI integration in both pages.yml and pages-preview.yml

The infrastructure is complete and working. This issue covers populating the actual competitor data.
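A new entry in data/competitors.yaml would presumably follow the schema established by the five proof-of-concept entries. The sketch below is illustrative only: the field names (name, category, website, dimensions, notes) and the dimension keys shown are assumptions, not the actual schema, and the evaluations are placeholders that would need sourcing.

```yaml
# Hypothetical entry shape -- field names and dimension keys are
# assumptions; match the schema of the 5 existing PoC entries.
- name: MetaGPT
  category: multi-agent-framework
  website: https://github.com/geekan/MetaGPT
  dimensions:
    multi_agent: full        # placeholder evaluation, not sourced
    workflow_engine: partial # placeholder evaluation, not sourced
    self_hosted: full        # placeholder evaluation, not sourced
  notes: >
    Role-based multi-agent framework; dimension values above are
    illustrative and would need verification before merge.
```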

Before populating: alignment needed

Before filling in entries, align on:

  1. Feature evaluation criteria -- how to assess each dimension (full/partial/none/planned) consistently across competitors. What counts as "full" vs "partial" for each of the 14 dimensions?
  2. Research methodology -- which sources to trust (official docs, GitHub README, actual code, community reports)?
  3. Accuracy verification -- how to validate claims (test installs, doc review, community input)?
  4. Dimension weighting -- are all 14 dimensions equally important, or should some be highlighted?
  5. Update cadence -- how often to refresh the data as frameworks evolve?
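Whatever criteria come out of this alignment, the issue's constraint that every entry carries all 14 dimension values drawn from full/partial/none/planned is mechanically checkable. A minimal sketch of such a check, using placeholder dimension names (the real file defines the actual 14 dimensions):

```python
"""Sketch of a schema check for competitor entries.

Assumes each entry maps 14 dimension keys to one of four values,
as described in the issue; the dimension names here are placeholders,
not the real schema.
"""

ALLOWED = {"full", "partial", "none", "planned"}
# Placeholder names -- the real data file defines the actual 14 dimensions.
DIMENSIONS = [f"dimension_{i}" for i in range(1, 15)]

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems found in one competitor entry."""
    problems = []
    dims = entry.get("dimensions", {})
    for key in DIMENSIONS:
        value = dims.get(key)
        if value is None:
            problems.append(f"missing dimension: {key}")
        elif value not in ALLOWED:
            problems.append(f"{key}: invalid value {value!r}")
    return problems

# Example: an entry missing one dimension and using an invalid value.
entry = {"name": "ExampleFramework",
         "dimensions": {k: "full" for k in DIMENSIONS[:13]}}
entry["dimensions"]["dimension_1"] = "maybe"
print(validate_entry(entry))
```

Running a check like this in CI alongside the generation script would catch incomplete entries before they reach the rendered table.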

Competitors to add

See #981 for the full list (~50 entries across 8 categories). Also search the web for any new frameworks that have launched since that issue was written.

Categories

  • Multi-agent frameworks (15+ entries)
  • Virtual org simulators (3 entries)
  • Workflow engines (6 entries)
  • Commercial platforms (6 entries)
  • Developer tools (5 entries)
  • Research projects (5 entries)
  • Protocols / standards (3 entries)

Acceptance criteria

  • 40+ competitor entries in data/competitors.yaml
  • Each entry has all 14 dimension values with accurate notes
  • uv run python scripts/generate_comparison.py produces clean Markdown
  • Landing page table renders correctly with full dataset
  • Feature evaluations are defensible and sourced
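The "clean Markdown" criterion amounts to rendering each YAML entry as a table row. The actual logic and column set of scripts/generate_comparison.py are not shown in this issue; the sketch below only illustrates the general shape, with an assumed symbol mapping for the four dimension values:

```python
"""Minimal sketch of turning competitor entries into a Markdown table,
in the spirit of scripts/generate_comparison.py (whose actual logic
and column set are not shown in the issue)."""

# Assumed symbol mapping -- the real script may render values differently.
SYMBOLS = {"full": "✅", "partial": "🟡", "none": "❌", "planned": "🔜"}

def to_markdown(entries: list[dict], dimensions: list[str]) -> str:
    """Render entries as a GitHub-flavored Markdown table."""
    header = "| Name | " + " | ".join(dimensions) + " |"
    divider = "|" + " --- |" * (len(dimensions) + 1)
    rows = []
    for e in entries:
        cells = [SYMBOLS.get(e["dimensions"].get(d, "none"), "?")
                 for d in dimensions]
        rows.append("| " + e["name"] + " | " + " | ".join(cells) + " |")
    return "\n".join([header, divider, *rows])

entries = [{"name": "CrewAI", "dimensions": {"multi_agent": "full"}}]
print(to_markdown(entries, ["multi_agent", "workflow_engine"]))
```

A missing dimension falls back to "none" here; the real script might instead fail loudly, which pairs better with the completeness criterion above.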

Metadata


Labels

  • prio:medium -- Should do, but not blocking
  • scope:medium -- 1-3 days of work
  • type:feature -- New feature implementation
  • v0.5 -- Minor version v0.5
  • v0.5.7 -- Patch release v0.5.7
