In context: It is no secret that AI models can lack accuracy. Hallucinations and doubling down on wrong information have been an ongoing struggle for developers. Because usage varies so widely across individual use cases, it has been hard to pin down quantifiable figures for AI accuracy. Now, a research team claims it has those numbers.
The Tow Center for Digital Journalism recently studied eight AI search engines: ChatGPT Search, Perplexity, Perplexity Pro, Gemini, DeepSeek Search, Grok-2 Search, Grok-3 Search, and Copilot. The researchers tested each for accuracy and recorded how frequently the tools refused to answer.
The researchers randomly chose 200 news articles from 20 news publishers (10 from each). They ensured each story appeared within the top three results of a Google search for a quoted excerpt from the article. Then, they ran the same query through each AI search tool and graded accuracy based on whether the tool correctly cited A) the article, B) the news organization, and C) the URL.
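To make that rubric concrete, here is a minimal sketch of how a response could be scored against those three criteria. This is not the Tow Center's actual evaluation code, and the function and field names are hypothetical; the study also used finer-grained labels than the three bins shown here.

```python
# Hypothetical sketch of the citation-grading idea described above.
# Not the Tow Center's code; names and categories are illustrative only.

def grade_response(response: dict, expected: dict) -> str:
    """Score an AI search result on three citation criteria:
    the article, the news organization, and the URL."""
    checks = [
        response.get("article") == expected["article"],
        response.get("publisher") == expected["publisher"],
        response.get("url") == expected["url"],
    ]
    if all(checks):
        return "completely correct"
    if any(checks):
        return "partially correct"
    return "completely incorrect"

# Example: a result that names the right article and publisher but the wrong URL
print(grade_response(
    {"article": "Story A", "publisher": "Outlet X", "url": "https://example.com/wrong"},
    {"article": "Story A", "publisher": "Outlet X", "url": "https://example.com/story-a"},
))  # -> "partially correct"
```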
The researchers then labeled each search on a scale from "completely correct" to "completely incorrect." As the study's chart shows, other than both versions of Perplexity, the AI tools did not perform well. Collectively, the AI search engines answered inaccurately 60 percent of the time. Worse, the tools delivered these wrong results with unwavering "confidence."
The study is fascinating because it quantifiably confirms what we have known for a few years: that LLMs are "the slickest con artists of all time." They report with complete authority that what they say is true, even when it is not, sometimes arguing the point or inventing further false assertions when confronted.
In a 2023 anecdotal article, Ted Gioia (The Honest Broker) pointed out dozens of ChatGPT responses, showing that the bot confidently “lies” when responding to numerous queries. While some examples were adversarial queries, many were just general questions.
“If I believed half of what I heard about ChatGPT, I could let it take over The Honest Broker while I sit on the beach drinking margaritas and searching for my lost shaker of salt,” Gioia flippantly noted.
Even when admitting it was wrong, ChatGPT would follow up that admission with more fabricated information. The LLM seems programmed to answer every user input at all costs. The researchers' data confirms this hypothesis, noting that ChatGPT Search was the only AI tool that answered all 200 article queries. However, it was completely accurate only 28 percent of the time and completely inaccurate 57 percent of the time.
ChatGPT isn't even the worst of the bunch. Both versions of X's Grok AI performed poorly, with Grok-3 Search being 94 percent inaccurate. Microsoft's Copilot fared little better when you consider that it declined to answer 104 of its 200 queries. Of the 96 it did answer, only 16 were "completely correct," 14 were "partially correct," and 66 were "completely incorrect," making it roughly 70 percent inaccurate on the questions it attempted.
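For anyone checking the math, the roughly 70 percent figure applies to the queries Copilot actually answered, not all 200. A quick calculation with the counts cited above (taken from the article, not the raw study data):

```python
# Back-of-the-envelope check of the Copilot figures cited above.
total_queries = 200
declined = 104
answered = total_queries - declined          # 96
completely_correct = 16
partially_correct = 14
completely_incorrect = 66                    # 16 + 14 + 66 == 96

inaccuracy_rate = completely_incorrect / answered
print(f"{inaccuracy_rate:.1%}")              # 68.8%, i.e. roughly 70 percent
```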
Arguably, the craziest thing about all this is that the companies making these tools are not transparent about this lack of accuracy, even as they charge the public $20 to $200 per month to access their latest AI models. Moreover, Perplexity Pro ($20/month) and Grok-3 Search ($40/month) answered slightly more queries correctly than their free counterparts (Perplexity and Grok-2 Search) but posted significantly higher error rates. Talk about a con.
However, not everyone agrees. TechRadar's Lance Ulanoff said he might never use Google again after trying ChatGPT Search, describing the tool as fast, aware, and accurate, with a clean, ad-free interface.
Feel free to read all the details in the Tow Center’s paper published in the Columbia Journalism Review, and let us know what you think.