Category: error analysis
-
Getting AI to not BE racist is harder than getting AI to not SEEM racist

AI systems tell you what you want to hear, not what they “think” is right. One troubling consequence is that they have learned not to *appear* racist, even when the underlying bias remains.
-
How do AI detectors work? (And why shouldn’t you trust them?)

Ever wondered how AI detectors work, and why they don’t work as well as advertised? Here’s an overview of how they operate and some of the bigger issues that come up when designing AI systems in general.
-
“Wrong but plausible” is the worst kind of wrong for AI

Generally, a small error is better than a big one. But a big error has one upside: it’s easier to notice. With AI-generated information, plausible errors are more likely to slip past unnoticed, and so they can do more damage than an answer that is obviously, totally wrong.

