Worth Reading: Shameless Guesses, Not Hallucinations
In a recent article, Scott Alexander made an interesting point: what AI produces are not hallucinations but shameless guesses (also known as bullshit), because the training process rewards correct answers but does not penalize incorrect ones. After all, having an AI model say "I don't know that" is not good for business, is it?
On a tangential note, calling those blunders hallucinations was a marketing masterstroke. Not being a native English speaker, I might be missing some nuances, but hallucinations feel like something you're not responsible for (at least some of the time), whereas we all know who is responsible for bullshit and shameless guesses – and responsibility is something the AI companies are clearly trying to stay as far away from as possible.
On another tangential note, if you're not following Scott Alexander's Substack, you're missing out.