Fears AI factcheckers on X could increase promotion of conspiracy theories

TruthLens AI Suggested Headline:

"Concerns Raised Over AI Fact-Checking System on X and Its Impact on Misinformation"

AI Analysis Average Score: 7.6
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

Elon Musk's social media platform, X, has initiated a pilot program that uses artificial intelligence (AI) chatbots to draft community notes aimed at fact-checking contentious posts. The decision has raised concerns among experts, including former UK technology minister Damian Collins, who warned that relying on AI for fact-checking could exacerbate the spread of misinformation and conspiracy theories. The AI-generated notes will be reviewed by humans before publication, according to Keith Coleman, vice president of product at X, who emphasized that the goal is a system in which AI assists humans rather than replacing them, arguing that the collaboration could improve both the quality and the trustworthiness of information. Collins, however, criticized the move as opening the door to the industrial manipulation of what users see and decide to trust on a platform with around 600 million users.

The announcement follows a broader trend among major tech companies away from human fact-checkers. Google recently said that user-created fact-checks, including those from professional fact-checking organizations, would be deprioritized in its search results, while Meta announced in January that it would drop human fact-checkers in the US in favor of its own community notes system. X's research paper accompanying the pilot claims that AI can produce faster, high-quality notes and that trust in the notes derives from user evaluations rather than from authorship. Despite these assertions, experts caution that AI's limitations in handling nuance and context could lead to the propagation of misinformation. Andy Dudfield of the UK fact-checking organization Full Fact warned that the new system could overwhelm human reviewers, opening the door to notes being drafted, reviewed, and published entirely by AI. Research also shows that users perceive human-authored notes as significantly more trustworthy than simple misinformation flags, suggesting X faces a credibility challenge as it adopts this new approach to fact-checking.

Unanalyzed Article Content

A decision by Elon Musk’s X social media platform to enlist artificial intelligence chatbots to draft factchecks risks increasing the promotion of “lies and conspiracy theories”, a former UK technology minister has warned.

Damian Collins accused Musk’s firm of “leaving it to bots to edit the news” after X announced on Tuesday that it would allow large language models to write community notes to clarify or correct contentious posts, before users approve them for publication. The notes have previously been written by humans.

X said using AI to write factchecking notes – which sit beneath some X posts – “advances the state of the art in improving information quality on the internet”.

Keith Coleman, the vice-president of product at X, said humans would review AI-generated notes and the note would appear only if people with a variety of viewpoints found it useful.

“We designed this pilot to be AI helping humans, with humans deciding,” he said. “We believe this can deliver both high quality and high trust. Additionally we published a paper along with the launch of our pilot, co-authored with professors and researchers from MIT, University of Washington, Harvard and Stanford laying out why this combination of AI and humans is such a promising direction.”

But Collins said the system was already open to abuse and that AI agents working on community notes could allow “the industrial manipulation of what people see and decide to trust” on the platform, which has about 600 million users.

It is the latest pushback against human factcheckers by US tech firms. Last month Google said user-created factchecks, including by professional factchecking organisations, would be deprioritised in its search results. It said such checks were “no longer providing significant additional value for users”. In January, Meta announced it was getting rid of human factcheckers in the US and would adopt its own community notes system on Instagram, Facebook and Threads.

X’s research paper outlining its new factchecking system criticised professional factchecking as often slow and limited in scale and said it “lacks trust by large sections of the public”.

AI-created community notes “have the potential to be faster to produce, less effort to generate, and of high quality”, it said. Human and AI-written notes would be submitted into the same pool and X users would vote for which were most useful and should appear on the platform.

AI would draft “a neutral well-evidenced summary”, the research paper said. Trust in community notes “stems not from who drafts the notes, but from the people that evaluate them”, it said.

But Andy Dudfield, the head of AI at the UK factchecking organisation Full Fact, said: “These plans risk increasing the already significant burden on human reviewers to check even more draft notes, opening the door to a worrying and plausible situation in which notes could be drafted, reviewed, and published entirely by AI without the careful consideration that human input provides.”

Samuel Stockwell, a research associate at the Centre for Emerging Technology and Security at the Alan Turing Institute, said: “AI can help factcheckers process the huge volumes of claims flowing daily through social media, but much will depend on the quality of safeguards X puts in place against the risk that these AI ‘note writers’ could hallucinate and amplify misinformation in their outputs. AI chatbots often struggle with nuance and context, but are good at confidently providing answers that sound persuasive even when untrue. That could be a dangerous combination if not effectively addressed by the platform.”

Researchers have found that people perceive human-authored community notes as significantly more trustworthy than simple misinformation flags.

An analysis of several hundred misleading posts on X in the run-up to last year’s presidential election found that in three-quarters of cases, accurate community notes were not being displayed, indicating they were not being upvoted by users. These misleading posts, including claims that Democrats were importing illegal voters and that the 2020 presidential election was stolen, amassed more than 2bn views, according to the Center for Countering Digital Hate.

Source: The Guardian