‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon

TruthLens AI Suggested Headline:

"Concerns Rise Over AI-Generated ADHD Books Sold on Amazon"

AI Analysis Average Score: 8.6
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards. Hover over chart points for metric details.

TruthLens AI Summary

Amazon has come under scrutiny for selling books that purport to provide expert advice on managing ADHD, which are allegedly authored by AI chatbots, such as ChatGPT. These AI-generated works have flooded the marketplace due to their low cost and ease of publication, yet they often contain misleading or potentially harmful information. Titles like 'Navigating ADHD in Men: Thriving with a Late Diagnosis' and 'Men with Adult ADHD: Highly Effective Techniques for Mastering Focus' have been flagged as entirely produced by AI, with a US-based company reporting a 100% AI detection score for each. Experts warn that the lack of regulation in online marketplaces allows dangerous misinformation to proliferate, particularly in health-related contexts, which can lead to misdiagnosis or exacerbate existing conditions. The generative AI systems behind these books have been criticized for their inability to reliably curate knowledge, as they are trained on a combination of credible medical texts and dubious sources, including pseudoscience and conspiracy theories.

The ethical responsibility of platforms like Amazon to prevent the sale of harmful content is a growing concern among academics and health professionals. While it is challenging to hold booksellers accountable for the content of every book, experts argue that the absence of regulations specifically targeting AI-authored works creates an environment where misinformation can thrive. Individuals like Richard Wordsworth have shared disturbing experiences of encountering AI-generated ADHD guides that include inaccurate and harmful advice, highlighting the potential risks to vulnerable readers. Amazon has stated that it has guidelines in place to regulate the content sold on its platform and that it actively works to detect and remove non-compliant books. However, the effectiveness of these measures remains in question as the marketplace evolves and the race for profit continues, potentially at the expense of consumer safety and well-being.

TruthLens AI Analysis

The article highlights a troubling trend in the publishing landscape, particularly focusing on the presence of books about ADHD on Amazon that are allegedly authored by AI chatbots. It raises concerns about the reliability of information provided by these AI-generated texts, especially in sensitive areas such as mental health. This raises broader questions about the implications of AI in content creation and the responsibilities of platforms selling such materials.

Concerns Over Misinformation

The proliferation of AI-generated books on platforms like Amazon has sparked significant concern among experts. The books claiming to provide expert advice on ADHD are scrutinized for potentially offering harmful or misleading information. The article cites the findings of Originality.ai, which confirmed that these books were entirely AI-generated. The call for regulation in this "wild west" of online marketplaces reflects a growing anxiety about the unchecked spread of misinformation.

Expert Opinions

Experts, such as Michael Cook from King’s College London, emphasize the risks associated with relying on generative AI for health-related advice. The capabilities of AI systems to synthesize information from diverse sources—including pseudoscience—raise alarms about their reliability. This concern is amplified when such information can lead to serious health consequences, including misdiagnosis or worsening of conditions.

Public Perception

The article aims to raise awareness among the public about the potential dangers of relying on AI-generated content for critical health-related information. By highlighting the lack of regulation and the risks involved, it seeks to create a sense of urgency for better oversight in the publishing industry. This could lead to increased skepticism towards AI-generated works and a demand for more trustworthy sources of information.

Comparative Context

When compared to other discussions around AI and misinformation, this article fits within a broader narrative that questions the quality and source of information in our digital age. It aligns with ongoing debates about the ethics of AI and the challenges faced by traditional media in maintaining credibility against the backdrop of rapidly evolving technology.

Potential Societal Impact

The implications of this trend could be far-reaching. If consumers begin to distrust AI-generated content, it may impact not only the publishing industry but also the tech sector's reputation. This could lead to increased calls for regulatory measures and standards for AI content creation. Furthermore, there might be a shift in the market dynamics, with platforms prioritizing human-authored works or implementing stricter guidelines for AI-generated publications.

Target Audience

The article appears to resonate more with health professionals, educators, and individuals concerned about mental health issues. It seeks to inform a broader community about the risks associated with unregulated AI content, thereby appealing to those who prioritize accuracy and reliability in information dissemination.

Financial Implications

In terms of financial markets, the conversation around AI and misinformation may influence tech stocks, particularly companies involved in AI development and publishing platforms. If regulatory measures are anticipated or implemented, it could affect investor confidence in AI-related businesses, leading to volatility in their stock prices.

Geopolitical Relevance

While the article does not explicitly discuss geopolitical implications, the issues surrounding AI ethics and misinformation are increasingly relevant in global discourse. Countries navigating the complexities of AI regulation might draw lessons from such discussions to inform their policies, reflecting a growing recognition of AI's impact on society.

The potential for AI to influence public understanding and behavior is significant, and with the increasing integration of AI into everyday life, discussions like those presented in the article will likely amplify. The concerns raised about the reliability of information from AI sources are valid, and the push for transparency and accountability in AI-generated content is essential for safeguarding public trust.

Overall, the reliability of this article is high, given its basis in expert opinions and documented findings regarding the nature of the books discussed. The article effectively communicates the urgency of the issue while encouraging a critical view of AI-generated content.

Unanalyzed Article Content

Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT.

Amazon’s marketplace has been deluged with AI-produced works that are easy and cheap to publish, but which include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting.

A number of books have appeared on the online retailer’s site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis; Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety; and Men with Adult ADHD Diet & Fitness.

Samples from eight books were examined for the Guardian by Originality.ai, a US company that detects AI content. The company said each had a rating of 100% on its AI detection score, meaning that its systems are highly confident that the books were written by a chatbot.

Experts said online marketplaces are a “wild west” owing to the lack of regulation around AI-produced work – and dangerous misinformation risks spreading as a result.

Michael Cook, a computer science researcher at King’s College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing together dangerous chemicals or ignoring health guidelines.

As such, it is “frustrating and depressing to see AI-authored books increasingly popping up on digital marketplaces” particularly on health and medical topics, which can result in misdiagnosis or worsen conditions, he said.

“Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they’ve also been trained on pseudoscience, conspiracy theories and fiction.

“They also can’t be relied on to critically analyse or reliably reproduce the knowledge they’ve previously read – it’s not as simple as having the AI ‘remember’ things that they’ve seen in their training data. Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert,” he said.

Yet he noted Amazon’s business model incentivises this type of practice, as it “makes money every time you buy a book, whether the book is trustworthy or not”, while the generative AI companies that create the products are not held accountable.

Prof Shannon Vallor, the director of the University of Edinburgh’s Centre for Technomoral Futures, said Amazon had “an ethical responsibility to not knowingly facilitate harm to their customers and to society”, although it would be “absurd” to make a bookseller responsible for the contents of all its books.

Problems are arising because the guardrails previously deployed in the publishing industry – such as reputational concerns and the vetting of authors and manuscripts – have been completely transformed by AI, she noted.

This is compounded by a “wild west” regulatory environment in which there are no “meaningful consequences for those who enable harms”, fuelling a “race to the bottom”, she said.

At present, there is no legislation that requires AI-authored books to be labelled as such. Copyright law only applies if a specific author’s content has been reproduced, although Vallor noted that tort law should impose “basic duties of care and due diligence”.

The Advertising Standards Authority said AI-authored books cannot be advertised in a way that gives the misleading impression they are written by a human, enabling people who have seen such books to submit a complaint.

Richard Wordsworth was hoping to learn about his recent adult ADHD diagnosis when his father recommended a book he found on Amazon after searching “ADHD adult men”.

When Wordsworth sat down to read it, “immediately, it sounded strange,” he said. The book opened with a quote from the conservative psychologist Jordan Peterson and then contained a string of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, he observed. For example, one chapter discussing emotional dysregulation warned that friends and family “don’t forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars.”

When Wordsworth researched the author, he noticed a headshot that looked AI-generated and that the author had no listed qualifications. He searched several other titles in the Amazon marketplace and was shocked to encounter warnings that his condition was “catastrophic” and that he was “four times more likely to die significantly earlier”.

He felt immediately “upset”, as did his father, who is highly educated. “If he can be taken in by this type of book, anyone could be – and so well-meaning and desperate people have their heads filled with dangerous nonsense by profiteering scam artists while Amazon takes its cut,” Wordsworth said.

An Amazon spokesperson said: “We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines.

“We continue to enhance our protections against non-compliant content and our process and guidelines will keep evolving as we see changes in publishing.”

Source: The Guardian