🤔 "ChatGPT has a problem with hallucinations" — I used to hear that regularly, a few months back, when people were talking about made-up nonsense being generated by the model. Now, I start to see people use the word confabulation more often.
💡 Confabulation seems like the better fit: hallucination carries connotations of sensory perception, and therefore anthropomorphises the large language model. Confabulation is the phenomenon in which the brain fills gaps in memory, and that matches the kind of mistake ChatGPT and similar models make: the model doesn't know what it should write, so it fills the gap with the most likely words for that sentence and context.
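To make that gap-filling concrete, here's a toy sketch in plain Python with entirely made-up probabilities (real models work over tokens and far larger vocabularies, and nothing here resembles any actual model's API). The point is just that when the model has no solid knowledge, it still has to emit *some* word, so it emits the likeliest-sounding one:

```python
# Toy illustration of gap-filling (confabulation). The probabilities
# below are invented for this example; a real model estimates them
# from training data, but the selection principle is the same.
next_word_probs = {
    "Smith": 0.34,       # a likely-sounding surname
    "Johnson": 0.28,
    "Miller": 0.22,
    "Xylophone": 0.01,   # grammatically odd, so very unlikely
}

prompt = "The author of this obscure 1950s pamphlet is Dr."

# There is no notion of truth here: the word with the highest
# estimated probability wins, whether or not it is correct.
best_word = max(next_word_probs, key=next_word_probs.get)
print(prompt, best_word)  # -> The author of this obscure 1950s pamphlet is Dr. Smith
```

The model confidently prints a name, not because it knows the answer, but because "Smith" is the statistically likeliest continuation — which is exactly why "filling in a memory gap" feels like the more accurate metaphor.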
I'm curious: What do you call these types of mistakes? Is "confabulation" a better fit, or will you stick to "hallucination"? Let me know on LinkedIn or Twitter!
(Also posted on my LinkedIn feed)