feat: populate comparison page with 40+ competitor entries #993
Labels
- `prio:medium` -- Should do, but not blocking
- `scope:medium` -- 1-3 days of work
- `type:feature` -- New feature implementation
- `v0.5` -- Minor version v0.5
- `v0.5.7` -- Patch release v0.5.7
Description
Summary
Populate the comparison page data file (data/competitors.yaml) with all 40+ agent orchestration frameworks, platforms, and research projects listed in #981.
Context
Issue #981 implemented the comparison page infrastructure:
- Shared YAML data file (`data/competitors.yaml`) with schema and 5 proof-of-concept entries (SynthOrg, CrewAI, AutoGen, LangGraph, ChatDev)
- Docs page generation script (`scripts/generate_comparison.py`) producing `docs/reference/comparison.md`
- Interactive landing page (`site/src/pages/compare.astro`) with React island (`ComparisonTable.tsx`) -- filter, sort, expand, mobile card view
- CI integration in both `pages.yml` and `pages-preview.yml`
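For orientation, a new entry in `data/competitors.yaml` might look roughly like the sketch below. The field and dimension names here are illustrative assumptions, not the actual schema -- match the structure of the 5 existing proof-of-concept entries when adding real data:

```yaml
# Hypothetical entry -- field names and dimension keys are assumptions,
# not the real schema; follow the existing proof-of-concept entries.
- name: ExampleFramework
  category: multi-agent-framework
  dimensions:
    persistent_state: full       # one of: full / partial / none / planned
    human_in_the_loop: partial
    # ...remaining dimensions, 14 in total
  notes: >
    One-line justification per dimension, with a source
    (official docs, README, or tested install).
```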
The infrastructure is complete and working. This issue covers populating the actual competitor data.
Before populating: alignment needed
Before filling in entries, align on:
- Feature evaluation criteria -- how to assess each dimension (full/partial/none/planned) consistently across competitors. What counts as "full" vs "partial" for each of the 14 dimensions?
- Research methodology -- which sources to trust (official docs, GitHub README, actual code, community reports)?
- Accuracy verification -- how to validate claims (test installs, doc review, community input)?
- Dimension weighting -- are all 14 dimensions equally important, or should some be highlighted?
- Update cadence -- how often to refresh the data as frameworks evolve?
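Once the evaluation criteria are agreed, consistency is easy to enforce mechanically. A minimal validator sketch (the dimension count comes from this issue; the entry layout and field names are assumptions about the YAML schema):

```python
# Sketch: check competitor entries against the agreed value vocabulary.
# The entry layout ("name" / "dimensions" keys) is an assumption, not
# necessarily the real schema in data/competitors.yaml.
ALLOWED_VALUES = {"full", "partial", "none", "planned"}
NUM_DIMENSIONS = 14  # per the comparison page schema


def validate(entries):
    """Return a list of human-readable problems found in the entries."""
    problems = []
    for entry in entries:
        name = entry.get("name", "?")
        dims = entry.get("dimensions", {})
        if len(dims) != NUM_DIMENSIONS:
            problems.append(
                f"{name}: expected {NUM_DIMENSIONS} dimensions, got {len(dims)}"
            )
        for dim, value in dims.items():
            if value not in ALLOWED_VALUES:
                problems.append(f"{name}: {dim} has invalid value {value!r}")
    return problems
```

A check like this could run in CI alongside `scripts/generate_comparison.py` so that a typo such as `ful` or a missing dimension fails fast instead of silently rendering a blank cell.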
Competitors to add
See #981 for the full list (~50 entries across 8 categories). Search the web for any new frameworks that have launched since that issue was written.
Categories
- Multi-agent frameworks (15+ entries)
- Virtual org simulators (3 entries)
- Workflow engines (6 entries)
- Commercial platforms (6 entries)
- Developer tools (5 entries)
- Research projects (5 entries)
- Protocols / standards (3 entries)
Acceptance criteria
- 40+ competitor entries in `data/competitors.yaml`
- Each entry has all 14 dimension values with accurate notes
- `uv run python scripts/generate_comparison.py` produces clean Markdown
- Landing page table renders correctly with full dataset
- Feature evaluations are defensible and sourced