The AI Con by Emily M Bender and Alex Hanna review – debunking myths of the AI revolution

TruthLens AI Suggested Headline:

"Authors Critique AI Hype and Its Societal Implications in 'The AI Con'"

AI Analysis Average Score: 6.5
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In their book, 'The AI Con', authors Emily M Bender and Alex Hanna critically examine the prevailing narratives surrounding artificial intelligence (AI) and its purported benefits. They express skepticism regarding the excitement surrounding AI, particularly the rhetoric used by political figures such as Keir Starmer, who recently introduced an 'AI opportunities action plan'. Bender and Hanna argue that what is marketed as AI is often a mere facade, designed to enrich a select few while undermining the creative efforts of many. They specifically highlight the shortcomings of large language models (LLMs), such as ChatGPT, describing them as 'synthetic text-extruding machines' that lack true understanding and often produce misleading or fabricated information. This raises concerns about the impact of generative AI on creative professionals, with surveys indicating significant job losses attributed to AI technologies. The authors contend that this shift not only threatens artists and writers but also has broader implications for society by potentially diminishing critical thinking skills as AI-generated content becomes more prevalent in everyday life.

Bender and Hanna also address the alarming trend of AI being used to replace human jobs, citing instances such as the National Eating Disorders Association's decision to replace hotline operators with a chatbot shortly after the operators sought to unionize. They reference a World Economic Forum report predicting that 40% of employers plan to downsize their workforce due to AI adoption. While acknowledging that there are legitimate applications for AI in fields like healthcare and energy management, the authors emphasize the need for scrutiny regarding the ethical implications of AI technologies. They caution against a future where decision-making is ceded to machines, as this could lead to a lack of accountability and a society dulled by an overreliance on AI. Ultimately, Bender and Hanna advocate for a critical evaluation of AI's role in our lives, urging society to consider both its potential benefits and the risks it poses to jobs, creativity, and human judgment.

TruthLens AI Analysis

The article critiques the prevailing narratives surrounding artificial intelligence (AI), particularly focusing on the recent optimistic viewpoints expressed by political figures such as Keir Starmer. The authors, Emily M Bender and Alex Hanna, argue against the hype of AI as a revolutionary force, suggesting instead that it represents a mere continuation of existing tech bubbles where the primary beneficiaries are a select few corporations.

Debunking AI Myths

Bender and Hanna employ a sarcastic tone to challenge the idea that AI, particularly large language models (LLMs) like ChatGPT, is genuinely transformative. They argue that these technologies merely replicate existing content without true understanding or creativity, likening them to "synthetic text-extruding machines." This analogy serves to diminish the perceived uniqueness of AI-generated outputs, framing them as inferior imitations rather than innovative advancements.

Concerns Over Intellectual Property

The authors highlight significant concerns regarding the impact of generative AI on creative professions. They cite a survey indicating that a notable percentage of creators have lost work opportunities due to the emergence of AI technologies. This raises questions about the ethical implications of using AI to generate content, particularly when it relies on datasets that may include copyrighted material.

Public Perception and Societal Impact

The article aims to shape public perception by implying that the enthusiasm surrounding AI is misguided. By portraying AI as a tool for exploitation rather than empowerment, it seeks to foster skepticism towards the promises of technological advancement. This perspective is likely to resonate with communities concerned about job displacement and the erosion of creative industries due to automation.

Economic and Political Ramifications

The discussion around AI's potential economic impact could have broader implications for policy and regulation. If the narrative shifts towards skepticism, it might influence how governments approach AI legislation, potentially leading to more stringent controls on data use and AI deployment. This could affect tech companies and investors, causing fluctuations in stock prices related to AI development.

Community Support

This critical view of AI may attract support from labor organizations, artists, and creative professionals who feel threatened by AI advancements. It seeks to engage those who are wary of unchecked technological growth and advocate for a more ethical approach to AI that prioritizes human creativity and labor rights.

Market Reactions

Given the contentious nature of the debate surrounding AI, this article could influence stock market perceptions, particularly for companies heavily invested in AI technologies. Companies like OpenAI, Meta, and others may face scrutiny as public sentiment shifts, potentially affecting their market performance.

Global Power Dynamics

The implications of AI extend beyond economic considerations, touching on global power dynamics. As nations grapple with AI's role in society, differing approaches to regulation could affect international relations, particularly between technologically advanced countries and those lagging in AI adoption.

The writing style of the article suggests a critical stance towards AI that aligns with ongoing discussions about its societal impact. The use of humor and sarcasm indicates an intention to provoke thought and discussion rather than present a neutral viewpoint.

In conclusion, the article offers a skeptical perspective on AI, challenging the dominant narrative of its revolutionary potential. It raises important questions about the implications of AI for creativity, employment, and societal structure. Its credibility rests on an engaging critique of widely accepted beliefs, though the authors' own biases may shape their conclusions.

Unanalyzed Article Content

At the beginning of this year, Keir Starmer announced an "AI opportunities action plan", which promises to mainline AI "into the veins of this enterprising nation". The implication that AI is a class-A injectable substance, liable to render the user stupefied and addicted, was presumably unintentional. But then what on earth did they mean about AI's potential, and did they have any good reason to believe it?

Not according to the authors of this book, who are refreshingly sarcastic about what they think is just another tech bubble. What is sold to us as AI, they announce, is just “a bill of goods”: “A few major well-placed players are poised to accumulate significant wealth by extracting value from other people’s creative work, personal data, or labor, and replacing quality services with artificial facsimiles.”

Take the large language models (LLMs), such as ChatGPT, which essentially work like fancy autocomplete and routinely make up citations to nonexistent sources. They have been "trained" – as though they are lovable puppies – on vast databases of books as well as scrapings from websites. (Meta has deliberately ingested one such illegal database, LibGen, claiming it is "fair use".) Meanwhile, "a survey conducted by the Society of Authors found that 26% of authors, translators, and illustrators surveyed had lost work due to generative AI."

Better to think of LLMs, Bender and Hanna suggest, as "synthetic text-extruding machines". "Like an industrial plastic process," they explain, text databases "are forced through complicated machinery to produce a product that looks like communicative language, but without any intent or thinking mind behind it". The same is true of other "generative" AI models that spit out images and music. They are all, the authors say, "synthetic media machines" – or, as I like to call them, giant plagiarism machines. "Both language models and text-to-image models will out-and-out plagiarize their inputs," the authors write, noting that the New York Times is suing OpenAI for just this reason.

But reliance on AI is not just bad for artists in garrets; it’s bad for everyone, as Bender and Hanna persuasively argue. The fact that internet search results now start with an AI-generated summary, they point out, is likely to dull critical thinking – and not just because such summaries have in the past told people that they should eat rocks, but because “scanning a set of links gives us information about what information sources are available” and so builds “our understanding of the information landscape”.

The real appeal of AI, as the authors see it, is that it promises to enable the making of vast numbers of people redundant. They recount, for example, how the National Eating Disorders Association in the US replaced their hotline operators with a chatbot days after the former voted to unionise. According to the World Economic Forum's 2025 report, 40% of employers are planning to reduce staff headcounts as they adopt AI in the coming years.

I, for one, do not want to live in a cultural wasteland of AI-generated garbage. But, amusing as this book’s broadside against the giant plagiarism machines is, it tends to lump everything else that can be called “AI” in with them. And the authors do know better: “AI is a marketing term,” they note at the start. “It doesn’t refer to a coherent set of technologies.” They do allow, subsequently, that there are “sensible use-cases” for such tech, such as image processing that helps radiologists, but there are many more that go unmentioned.

Under a broader definition of “AI” as machine-learning systems, emerging tools can, according to a recent overview by the Economist, manage load on the electricity grid more effectively, cut the time required to inspect nuclear facilities and help reduce emissions in trucking, shipping, steelmaking and mining industries. The British engineer Demis Hassabis won the Nobel in chemistry last year for his company DeepMind’s work on protein folding, which may yet have profound applications in drugmaking. And, less glamorously, machines can now automatically transcribe doctors’ notes: an example these authors present as a reason it’s bad that AI is infiltrating the NHS, but surely one that is a win-win for doctors and patients alike.

Nevertheless, Bender and Hanna are right to insist that each such case should be scrutinised for its utility, the biases it might smuggle in, and its propensity to destroy jobs that depend on human judgment. They cite a famous old rule from IBM: “A computer can never be held accountable, therefore a computer must never make a management decision.” But that is precisely why some in power want to hand decision‑making capacity to computers: it promises a sunlit utopia of profit without blame. Once AI is mainlined into our veins, we may be too doped up to care.


The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want by Emily M Bender and Alex Hanna is published by Bodley Head (£22). To support the Guardian, order your copy at guardianbookshop.com. Delivery charges may apply.

Source: The Guardian