With the advent of artificial intelligence (AI), the demarcation between reality and the virtual world is blurring. These days, AI is often used to spread misinformation, so the question arises: can it also be trusted to detect lies spreading online? A new study highlights that not everyone is comfortable relying on chatbots to separate truth from misinformation.
According to researcher Jason Thatcher, who serves as the Tandean Rustandy Esteemed Endowed Chair and Professor at the Leeds School of Business at the University of Colorado Boulder, political beliefs might play a crucial role in determining whether people trust AI-driven fact-checking systems.
From geopolitics to AI, timesnownews.com gets views from the best in the world. With the help of US experts, we decode how people respond to AI-led versus human-led fact-checking.
What Did The Study Find?
Notably, the study examined how individuals reacted when information was verified by a human versus an AI tool. Thatcher explained the motivation behind the study: “How do people respond to a bot or artificial intelligence application versus a human doing fact-checking?”
According to him, the findings surprised the researchers. He found that more progressive people trusted humans rather than bots, while more conservative people felt especially strongly about wanting a human voice behind the fact-checking experience. The study highlights that some users are comfortable trusting AI systems, while others prefer human involvement when it comes to verifying online information.
How Was The Research Conducted?
As per Thatcher, the team conducted two experiments involving hundreds of participants across two countries.
One study involved a broader group of US citizens recruited through the online research platform Prolific. In the second experiment, researchers intentionally chose an equal number of progressive and conservative participants to better understand whether political orientation influenced their trust in fact-checking.
Thatcher explained, “That wasn’t something we really set out to find. We were interested in political orientation, but we did not expect to see differences across the two different value systems.”
Do People Trust AI Fact-Checkers?
The professor asserted that the key takeaway was not that AI fact-checkers fail, but that trust differs based on the audience. He revealed, “For some people, AI fact-checkers work great, but for all people, human fact-checkers work equally well.” This research comes at a time when big tech organisations are rapidly turning to AI tools to detect misinformation and to verify and moderate content at scale.
He mentioned, “Before social media, before online platforms, we had rules and norms and policies which we held reporters accountable for. If they printed something, they had to fact check it themselves.”
What Could Be The Solution?
As per the professor, instead of depending on a single solution, combating misinformation requires multiple layers of verification involving both technology and public participation. He added, “It’s not a simple problem. But the solution to the problem seems to be more than one mechanism to fact check.” He also said citizens should be more active in identifying fake information in their communities.

