This Company Says AI Models Hallucinate Less Than Humans: Here’s What It Means

AI hallucination has become a serious problem, especially as the technology becomes a core part of mainstream services. However, one of the top AI companies sees it differently, arguing that while AI models do make mistakes, they make fewer than humans.
Anthropic recently introduced its Claude 4 AI models, and its CEO has reportedly made a strong statement in defence of the technology. Dario Amodei, CEO of Anthropic, was speaking at the launch event, where he also discussed the new models and how they compete with ChatGPT and Gemini.
AI Hallucinates, But Humans Do It More: What He Means
Amodei was asked about the reliability of AI, where hallucinations remain a big concern, and he responded that AI has issues, but fewer than humans. "It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways," he was quoted as saying in this report by TechCrunch.
He elaborated by pointing out that TV broadcasters, politicians and other humans make mistakes as often as AI does, yet we do not dismiss their obvious intelligence. Amodei may have a point, but that is a general way of looking at things that might not hold up once AI enters the equation.
Having said that, he was quick to acknowledge the current limitations of AI models, which need to be ironed out so that these mistakes do not become the stick with which the technology is beaten forever.
Giants like OpenAI and Google have fallen victim to this issue and been forced to correct their mistakes in recent months. Apple, too, faced errors in its AI news summaries, which made the company pause the feature's rollout and re-release it only after the errors were fixed. Anthropic itself has faced legal trouble over an incorrect citation generated by Claude, which pushes the likes of Amodei to defend their technology publicly while working to fix the issues in the long run.