NewsGuard reports that the 10 leading chatbots collectively repeated misinformation 18% of the time and offered a blank stare – or rather, a non-response – 20.33% of the time.
Combined, that’s a 38.33% fail rate.
Which is like saying your GPS gets you lost more than a third of the time, but does it with such confidence that you start wondering if you misunderstood how maps work.
So… why does this happen?
Spoiler: it has everything to do with how LLMs are trained—and what they’re “rewarded” for.
Curious? I unpack it all (with jokes, not jargon) in Artificial Stupelligence.
Launching May 1st on Amazon.