Millions of websites to get 'game-changing' AI bot blocker

TruthLens AI Suggested Headline:

"Cloudflare Introduces AI Bot Blocking System for Millions of Websites"

AI Analysis Average Score: 8.1
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

Millions of websites, including notable ones like Sky News, The Associated Press, and Buzzfeed, are set to gain the ability to block artificial intelligence (AI) bots from accessing their content without authorization. This new initiative is being implemented by Cloudflare, a major internet infrastructure firm that supports approximately 20% of the internet's traffic. The system aims to empower publishers by enabling them to request payment from AI companies for the use of their content, addressing a growing concern among creators who allege that AI firms have been training their systems on their work without consent or compensation. This issue has sparked significant debates, particularly in the UK, where prominent artists, including Sir Elton John, have raised alarms about copyright infringements related to AI technologies. Cloudflare's technology specifically targets AI bots, known as crawlers, which play a crucial role in how AI firms gather and process data from the web. Currently, Cloudflare's solution is operational on around one million websites, providing a potential framework for a more equitable relationship between content creators and AI companies.

Roger Lynch, the CEO of Condé Nast, said the move is a "game-changer" for publishers, emphasizing its importance in establishing a fair value exchange on the internet that safeguards creators and promotes quality journalism. However, some experts caution that stronger legal protections will still be necessary to comprehensively protect creators' rights. Initially, this blocking system will be applied by default to new Cloudflare users and sites that have previously engaged in efforts to restrict crawlers. While many publishers permit crawlers from search engines such as Google to access their content in exchange for increased visibility, they are increasingly wary of AI crawlers that exploit content without directing traffic back to the original sources, thereby undermining revenue for content creators. Cloudflare is also working on a "Pay Per Crawl" model that would allow content creators to monetize their original works used by AI firms. Despite this progress, experts warn that the solution is limited, highlighting the need for broader legal reforms to ensure adequate protection against unauthorized AI usage of creative works.


Unanalyzed Article Content

Millions of websites - including Sky News, The Associated Press and Buzzfeed - will now be able to block artificial intelligence (AI) bots from accessing their content without permission. The new system is being rolled out by internet infrastructure firm Cloudflare, which hosts around a fifth of the internet. Eventually, sites will be able to ask for payment from AI firms in return for having their content scraped.

Many prominent writers, artists, musicians and actors have accused AI firms of training systems on their work without permission or payment. In the UK, it led to a furious row between the government and artists including Sir Elton John over how to protect copyright.

Cloudflare's tech targets AI firm bots - also known as crawlers - programmes that explore the web, indexing and collecting data as they go. They are important to the way AI firms build, train and operate their systems. So far, Cloudflare says its tech is active on a million websites.

Roger Lynch, chief executive of Condé Nast, whose print titles include GQ, Vogue and The New Yorker, said the move was "a game-changer" for publishers. "This is a critical step toward creating a fair value exchange on the Internet that protects creators, supports quality journalism and holds AI companies accountable", he wrote in a statement. However, other experts say stronger legal protections will still be needed.

Initially the system will apply by default to new users of Cloudflare services, plus sites that participated in an earlier effort to block crawlers. Many publishers accuse AI firms of using their content without permission. Recently the BBC threatened to take legal action against US-based AI firm Perplexity, demanding it immediately stop using BBC content and pay compensation for material already used. However, publishers are generally happy to allow crawlers from search engines, like Google, to access their sites, so that the search companies can, in return, direct people to their content. Perplexity accused the BBC of seeking to preserve "Google's monopoly".

But Cloudflare argues AI breaks the unwritten agreement between publishers and crawlers. AI crawlers, it argues, collect content like text, articles and images to generate answers, without sending visitors to the original source - depriving content creators of revenue. "If the Internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone," wrote the firm's chief executive Matthew Prince.

To that end the company is developing a "Pay Per Crawl" system, which would give content creators the option to request payment from AI companies for utilising their original content.

According to Cloudflare, there has been an explosion of AI bot activity. "AI Crawlers generate more than 50 billion requests to the Cloudflare network every day", the company wrote in March. And there is growing concern that some AI crawlers are disregarding existing protocols for excluding bots. In an effort to counter the worst offenders, Cloudflare previously developed a system in which offending bots would be sent to a "Labyrinth" of web pages filled with AI-generated junk.

The new system attempts to use technology to protect the content of websites and to give sites the option to charge AI firms a fee to access it.
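To make the idea concrete, here is a minimal sketch, in Python, of the kind of gatekeeping being described: a web server checks the User-Agent header of each request and turns away visitors that identify themselves as AI crawlers, signalling that access would require payment. This is not Cloudflare's implementation; the crawler names listed and the use of an HTTP 402 "Payment Required" response are assumptions made purely for illustration.

    # Illustrative sketch only: a tiny WSGI app that refuses requests whose
    # User-Agent identifies a known AI crawler, loosely mirroring the kind of
    # gatekeeping described above. The crawler names and the HTTP 402 response
    # are assumptions for this example, not Cloudflare's actual mechanism.
    from wsgiref.simple_server import make_server

    # Hypothetical blocklist; a real deployment would maintain and update this.
    AI_CRAWLER_AGENTS = ("GPTBot", "CCBot", "ClaudeBot", "PerplexityBot")

    def app(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if any(bot in user_agent for bot in AI_CRAWLER_AGENTS):
            # Signal that scraping this content would require a paid licence.
            start_response("402 Payment Required", [("Content-Type", "text/plain")])
            return [b"AI crawling requires a licence agreement with this publisher.\n"]
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Regular article content, served to readers and search engines.\n"]

    if __name__ == "__main__":
        with make_server("", 8000, app) as server:
            server.serve_forever()

In practice a crawler can simply misreport its User-Agent, which is one reason the experts quoted below argue that technical measures alone cannot fully protect creators' work.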
In the UK there is an intense legislative battle between the government, creators and AI firms over the extent to which the creative industries should be protected from AI firms using their works to train systems without permission or payment. And, on both sides of the Atlantic, content creators, licensors and owners have gone to court in an effort to prevent what they see as AI firms' encroachment on creative rights.

Ed Newton-Rex, the founder of Fairly Trained, which certifies that AI companies have trained their systems on properly licensed data, said it was a welcome development - but there was "only so much" one company could do. "This is really only a sticking plaster when what's required is major surgery," he told the BBC. "It will only offer protection for people on websites they control - it's like having body armour that stops working when you leave your house," he added. "The only real way to protect people's content from theft by AI companies is through the law."

Source: BBC News