Lab Leaderboard

Which AI labs build models that best support user speech?

What We Measure

SpeechMap.AI tests how AI models respond to sensitive and controversial prompts. We measure which requests models refuse, redirect, or filter. Higher scores mean a model engages more directly with difficult requests rather than declining or deflecting.

Labs are ranked by their Free Speech Index Score: a time-weighted average of model scores from each lab's latest release cycle, a 6-month window anchored to that lab's most recent release. Only labs with a release in the last 6 months are shown. For individual model results, see the Models page.
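The scoring rule above can be sketched in a few lines. The weighting function is an assumption (the exact decay is not stated here), so treat this as an illustration of the windowed, time-weighted average rather than the site's actual implementation:

```python
from datetime import date

def lab_index(models, window_days=183):
    """Time-weighted average score over a lab's latest release cycle.

    `models` is a list of (score, release_date) tuples. The 6-month
    window is anchored to the lab's most recent release; models released
    before the window are excluded. The linear recency weighting below
    is an assumed placeholder, not the published formula.
    """
    anchor = max(d for _, d in models)          # most recent release
    recent = [(s, d) for s, d in models
              if (anchor - d).days <= window_days]
    # Assumed weighting: the newest release gets weight 1.0, a release
    # at the window edge keeps a small positive weight.
    weights = [1.0 - (anchor - d).days / (window_days + 1)
               for _, d in recent]
    return sum(w * s for w, (s, _) in zip(weights, recent)) / sum(weights)
```

A lab with a single in-window model simply scores that model's value; older releases pull the index toward their scores with progressively less weight.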

Last updated: 2026-03-26

| Rank | Lab | Index | Peak Score | Models |
|------|-----|-------|------------|--------|
| #1 | Mistral AI | 91.0 | 98.2 | 7 |
| #2 | xAI | 82.2 | 98.2 | 8 |
| #3 | Google DeepMind | 77.4 | 88.0 | 6 |
| #4 | TNG Technology Consulting | 77.2 | 82.6 | 2 |
| #5 | Arcee AI | 74.7 | 82.2 | 2 |
| #6 | Zhipu AI | 73.2 | 85.8 | 9 |
| #7 | DeepSeek | 71.6 | 91.3 | 9 |
| #8 | Prime Intellect | 69.2 | 69.2 | 1 |
| #9 | xiaomi | 62.6 | 62.6 | 1 |
| #10 | inception | 56.5 | 56.5 | 1 |
| #11 | NVIDIA | 54.5 | 67.6 | 3 |
| #12 | Moonshot AI | 54.1 | 65.7 | 5 |
| #13 | Allen Institute for AI | 53.8 | 76.9 | 4 |
| #14 | Amazon | 51.2 | 65.8 | 2 |
| #15 | MiniMax | 51.0 | 55.2 | 3 |
| #16 | stepfun | 49.9 | 49.9 | 1 |
| #17 | liquid | 46.3 | 46.3 | 1 |
| #18 | OpenAI | 44.4 | 69.6 | 13 |
| #19 | Alibaba | 41.7 | 56.9 | 10 |
| #20 | Anthropic | 39.3 | 60.1 | 10 |
| #21 | ByteDance | 32.0 | 34.4 | 3 |