More than 2 years after ChatGPT, newsrooms still struggle with AI’s shortcomings

An inaccurate AI-produced reading list recently published by two newspapers demonstrates just how easy it still is for publishers to circulate AI slop.

The Chicago Sun-Times and the Philadelphia Inquirer last week published a summer reading insert produced by King Features, a Hearst Newspapers subsidiary that provides the pair with licensed content. While the insert included real authors, the recommended books were mostly fake. 404 Media ultimately found that a human writer had produced the list using ChatGPT and failed to fact-check it.

“I do use AI for background at times but always check out the material first,” the insert’s writer told 404 Media. “This time, I did not and I can’t believe I missed it because it’s so obvious. No excuses.”

OpenAI’s launch of ChatGPT more than two years ago kicked off an AI gold rush, resulting in a deluge of AI-infused tools that aim to help people find information online without sifting through lists of links. But that convenience comes at a cost: AI chatbots continue to offer incorrect or speculative responses.

Newsrooms have adopted AI chatbots with some trepidation, aware that the technology opens up new opportunities as well as the potential for high-profile blunders, all amid fears that AI could lead to job losses and eat into news outlets’ revenue. Not adopting the technology, however, means risking being left behind as others use AI to comb through enormous datasets, incubate ideas and help readers navigate complicated narratives.

Though many major newsrooms have adopted AI guidelines since ChatGPT’s launch, the sheer size of some editorial staffs, coupled with multiple external partnerships, makes it difficult to pinpoint where embarrassing AI blunders can creep in. The insert incident exemplifies the myriad ways AI errors can slip into news products.

Most of the supplements the Sun-Times has run this year, from puzzles to how-to guides, have come from Hearst, Tracy Brown, the chief partnerships officer for Sun-Times parent Chicago Public Media, told CNN. But whether it’s an insert or a full-length story, Brown stressed, newsrooms have to use AI carefully.

“It’s not that we’re saying that you can’t use any AI,” she said. “You have to use it responsibly and you have to do it in a way that keeps your editorial standards and integrity intact.”

Verify everything chatbots touch

It’s precisely because AI is prone to errors that newsrooms must maintain the “fundamental standards and values that have long guided their work,” Peter Adams, a senior vice president of research and design at the News Literacy Project, told CNN. That includes being transparent about using AI in the first place.

Many high-profile publishers have been candid about how their newsrooms use AI to bolster reporting. The Associated Press, considered by many within the news industry to be the gold standard for journalism practices, has used AI for translation, summaries and headlines, and it has avoided gaffes by always including a human backstop.

Amanda Barrett, the AP’s vice president of standards, told CNN that any information gathered using AI tools is treated as unvetted source material, with reporters responsible for verifying AI-produced information. The AP also checks that its third-party partners have similar AI policies.

“It’s really about making sure that your standards are compatible with the partner you’re working with and that everyone’s clear on what the standard is,” Barrett said.
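Part of that verification can be mechanized. The sketch below is a minimal illustration, not a newsroom tool: it assumes Python with the requests library and queries the public Open Library search API to flag recommended titles that no catalog seems to know, leaving anything suspicious for a human fact-checker. The sample entries are placeholders; “Tidewater Dreams” is one of the invented titles 404 Media reported the insert attributed to Isabel Allende.

```python
# Minimal sketch: flag AI-suggested books that a public catalog cannot find.
# Assumes the `requests` package; queries the Open Library search API.
import requests

def book_exists(title: str, author: str) -> bool:
    """Return True if Open Library reports at least one match for title + author."""
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 1},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("numFound", 0) > 0

# Placeholder list standing in for an AI-generated recommendation list.
suggestions = [
    ("The Overstory", "Richard Powers"),      # a real novel; should be found
    ("Tidewater Dreams", "Isabel Allende"),   # reportedly fabricated by ChatGPT
]

for title, author in suggestions:
    verdict = "found in catalog" if book_exists(title, author) else "NOT FOUND: send to a fact-checker"
    print(f"{title} ({author}): {verdict}")
```

A missing catalog hit does not prove a book is fake, since new or obscure titles may be absent, which is why a check like this can only route items to a human, never clear them automatically.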
Zack Kass, an AI consultant and former OpenAI go-to-market lead, echoed Barrett, telling CNN that newsrooms need to treat AI “like a junior researcher with unlimited energy and zero credibility.” That means AI writing should be “subject to the same scrutiny as a hot tip from an unvetted source.”

“The mistake is using it like it’s a search engine instead of what it really is: an improviser with a genius-level memory and no instinct for truth,” he added.

AI news errors are rare, but embarrassing

High-profile AI mistakes in newsrooms, when they happen, tend to be very embarrassing. Bloomberg News’ AI-generated summaries, announced in January, have already included several errors. The LA Times’ Insights AI in March sympathized with the KKK within 24 hours of its launch. And in January, Apple pulled an Apple Intelligence feature that had incorrectly summarized push notifications from news outlets.

Nor is the problem new. For years, newsrooms have struggled when AI has been allowed to proceed unchecked. Gannett in 2023 was forced to pause an AI experiment after several major errors in high school sports articles, and CNET that same year published several inaccurate AI-written stories.

Still, as Felix Simon, a research fellow in AI and digital news at the University of Oxford’s Reuters Institute for the Study of Journalism, points out, “the really egregious cases have been few and far between.”

Research advances have reduced hallucinations, or false answers from AI, by pushing chatbots to spend more time reasoning before they respond, Chris Callison-Burch, a professor of computer and information science at the University of Pennsylvania, told CNN. But the models are not infallible, which is why these incidents still occur.

“AI companies need to do a better job communicating to users about the potential for errors, since we have repeatedly seen examples of users misunderstanding how to use technology,” Callison-Burch said.

According to Brown, all editorial content at the Sun-Times is produced by humans. Looking forward, the newspaper will ensure that editorial partners like King Features uphold those same standards, just as it already ensures that freelancers’ codes of ethics mirror its own.

But the “real takeaway,” as Kass put it, isn’t just that humans are needed; it’s “why we’re needed.”

“Not to clean up after AI, but to do the things AI fundamentally can’t,” he said. “(To) make moral calls, challenge power, understand nuance and decide what actually matters.”
TruthLens AI Suggested Headline:
"AI Errors Highlight Ongoing Challenges for Newsrooms Two Years After ChatGPT Launch"
TruthLens AI Summary
The recent incident involving an inaccurate AI-generated reading list published by the Chicago Sun-Times and the Philadelphia Inquirer highlights the ongoing challenges newsrooms face in integrating artificial intelligence into their operations. The insert, produced by King Features, included real authors’ names but predominantly featured fictitious book titles, raising significant concerns about editorial integrity. The writer who created the list acknowledged using ChatGPT without proper fact-checking, which resulted in the publication of misleading information. The episode underscores the risks of AI tools: while they offer convenience in processing vast amounts of information, they can also introduce serious errors if not monitored closely. The proliferation of AI technologies since ChatGPT’s launch has prompted many news organizations to adopt AI guidelines, yet the size of some editorial teams and the number of external partnerships complicate the oversight needed to prevent such blunders.
As newsrooms continue to explore AI’s potential, they face the dual challenge of leveraging these tools for efficiency while maintaining high editorial standards. Experts stress the importance of transparency and human oversight, with organizations like the Associated Press setting a precedent by treating AI-generated content as unverified material that requires thorough validation. AI errors, although infrequent, remain a significant concern, as various high-profile cases of erroneous AI output have led to public embarrassment. Looking ahead, experts emphasize that the journalist’s role is irreplaceable: humans are needed not only to correct AI mistakes but also to supply the critical thinking, moral reasoning and nuanced judgment that AI cannot replicate. This ongoing conversation about the responsible use of AI in journalism is crucial for preserving the integrity and trustworthiness of news media in an increasingly automated world.
TruthLens AI Analysis
The article highlights the ongoing challenges newsrooms face in effectively integrating AI technologies like ChatGPT. It centers on an incident in which two newspapers published a flawed AI-generated reading list, revealing the pitfalls of relying on artificial intelligence for content creation without adequate oversight. The incident serves as a case study of the broader implications of AI’s integration into journalism, reflecting both the promise and peril of these technologies.
Implications of AI Integration in Journalism
The release of ChatGPT marked a turning point for media outlets, leading to a surge in AI tools designed to streamline information retrieval and content generation. However, the reliance on these technologies has resulted in instances of misinformation, as showcased by the inaccurate reading list. This highlights a critical tension in the industry: while AI can enhance efficiency, it can also lead to significant errors if not properly managed.
Perception Management
The article contributes to shaping public perception regarding the reliability of AI in journalism. By presenting a concrete example of an AI failure, it raises questions about the credibility of news sources that employ these technologies. This discourse may foster skepticism among readers, prompting them to question the accuracy of AI-assisted journalism and the integrity of the sources they consume.
Transparency and Accountability
There is an underlying theme of the necessity for accountability within newsrooms. The acknowledgment of a human writer's oversight indicates a need for more rigorous fact-checking processes when using AI tools. This incident serves as a cautionary tale for other news organizations regarding the importance of maintaining journalistic standards in the age of AI.
Potential Consequences for the Industry
The repercussions of this incident could extend beyond public perception. Newsrooms may face increased scrutiny regarding their AI usage policies, leading to potential overhauls in how they approach technology in content creation. The fear of job losses due to AI could also exacerbate tensions within the industry, as employees might feel threatened by automation's encroachment on traditional roles.
Audience Engagement and Community Response
Certain demographics, particularly tech-savvy readers and those concerned about misinformation, may resonate more with this narrative. The article likely aims to engage those who value accurate journalism and are wary of technology's role in shaping news content. By addressing these concerns, the article seeks to foster a discourse around the ethical implications of AI in media.
Market and Economic Impact
The article may indirectly influence market perceptions regarding technology companies involved in AI development. Stakeholders might reassess the viability and reliability of investments in companies that provide AI solutions for the media industry. Additionally, news outlets that are perceived as mishandling AI may see a dip in audience trust, potentially affecting their revenue streams.
Global Context and Relevance
In the context of global discourse on AI, this article reflects broader societal concerns regarding the implications of technology on information dissemination. As AI continues to evolve, the challenges faced by newsrooms will likely remain pertinent to ongoing discussions about ethics, accountability, and the future of journalism.
AI Influence on Content Creation
It's plausible that AI tools were utilized in drafting aspects of the article itself, particularly in analyzing data or generating initial content structures. The language used may reflect AI's influence in framing the narrative around the challenges of AI in journalism, emphasizing the need for caution and diligence.
Manipulative Elements
The article does not overtly manipulate but rather raises awareness of a valid concern within the industry. By spotlighting a specific error, it aims to inform the public while advocating for more stringent practices in AI application, thus promoting a more informed reader base.
The reliability of the article can be considered moderate to high, given its factual basis in a specific incident and the inclusion of perspectives from industry professionals. However, the focus on a negative example may skew the perception of AI's overall potential in journalism.