The children's commissioner for England is calling on the government to ban apps which use artificial intelligence (AI) to create sexually explicit images of children.

Dame Rachel de Souza said a total ban was needed on apps which allow "nudification" - where photos of real people are edited by AI to make them appear naked - or can be used to create sexually explicit deepfake images of children. She said the government was allowing such apps to "go unchecked with extreme real-world consequences".

A government spokesperson said child sexual abuse material was illegal and that there were plans for further offences for creating, possessing or distributing AI tools designed to create such content.

Deepfakes are videos, pictures or audio clips made with AI to look or sound real.

In a report published on Monday, Dame Rachel said the technology was disproportionately targeting girls and young women, with many bespoke apps appearing to work only on female bodies.

Girls are actively avoiding posting images or engaging online to reduce the risk of being targeted, according to the report, "in the same way that girls follow other rules to keep themselves safe in the offline world - like not walking home alone at night".

Children feared "a stranger, a classmate, or even a friend" could target them using technologies which could be found on popular search and social media platforms.

Dame Rachel said: "The evolution of these tools is happening at such scale and speed that it can be overwhelming to try and get a grip on the danger they present.

"We cannot sit back and allow these bespoke AI apps to have such a dangerous hold over children's lives."

Dame Rachel also set out a series of further calls on the government.

Paul Whiteman, general secretary of school leaders' union NAHT, said members shared the commissioner's concerns.

He said: "This is an area that urgently needs to be reviewed as the technology risks outpacing the law and education around it."
It is illegal in England and Wales under the Online Safety Act to share or threaten to share explicit deepfake images.

The government announced in February laws to tackle the threat of child sexual abuse images being generated by AI, which include making it illegal to possess, create, or distribute AI tools designed to create such material.

It said at the time that the Internet Watch Foundation - a UK-based charity partly funded by tech firms - had confirmed 245 reports of AI-generated child sexual abuse in 2024 compared with 51 in 2023, a 380% increase.

Media regulator Ofcom published the final version of its Children's Code on Friday, which puts legal requirements on platforms hosting pornography and content encouraging self-harm, suicide or eating disorders, to take more action to prevent access by children.

Websites must introduce beefed-up age checks or face big fines, the regulator said.

Dame Rachel has criticised the code, saying it prioritises the "business interests of technology companies over children's safety".

A government spokesperson said creating, possessing or distributing child sexual abuse material, including AI-generated images, is "abhorrent and illegal".

"Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines," they added.

"The UK is the first country in the world to introduce further AI child sexual abuse offences - making it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material."
Call for ban on AI apps creating naked images of children
TruthLens AI Suggested Headline:
"Children's Commissioner Calls for Ban on AI Apps Creating Explicit Images of Minors"
TruthLens AI Summary
Dame Rachel de Souza, the children's commissioner for England, is advocating for a complete ban on artificial intelligence (AI) applications that generate sexually explicit images of children. She highlighted the alarming trend of "nudification", where AI manipulates photos of real people to make them appear naked, as well as the creation of deepfake content involving minors. Dame Rachel emphasized that the unchecked proliferation of these applications poses severe risks, particularly to young girls and women, who are disproportionately targeted by such technologies. According to her report, this creates an environment where children are increasingly fearful of online interactions, leading them to take precautionary measures similar to those they would adopt offline to ensure their safety. The report notes that girls are avoiding sharing images online to reduce the risk of being exploited or victimized by individuals who misuse these technologies.
In response to these concerns, government representatives have acknowledged the illegality of child sexual abuse material and pointed to planned offences covering AI tools designed to create such content. The Online Safety Act already prohibits sharing or threatening to share explicit deepfake images, and the government intends to extend these measures further. Notably, the Internet Watch Foundation confirmed 245 reports of AI-generated child sexual abuse in 2024, up from 51 in 2023 - a 380% increase - underlining the urgency of regulatory action. The media regulator Ofcom has also published the final version of its Children's Code, mandating stringent age verification for platforms hosting harmful content. However, Dame Rachel criticized the code for prioritizing the business interests of technology companies over the safety of children, and urged the government to take decisive steps to protect children from apps that can manipulate images and create deepfake content.
TruthLens AI Analysis
The article highlights a pressing issue regarding the use of artificial intelligence in creating sexually explicit images of children. Dame Rachel de Souza, the children's commissioner for England, advocates for a total ban on such applications that manipulate images to depict minors inappropriately. This alarming call to action raises critical questions about child safety and the rapid evolution of technology that outpaces regulatory measures.
Public Perception and Concerns
The article aims to generate concern among the public regarding the unregulated nature of AI applications that can produce harmful content. It portrays a scenario where children's safety is at risk due to the existence of these technologies, which are highlighted as disproportionately affecting girls. By emphasizing the vulnerabilities children face online, the article seeks to mobilize public support for regulatory changes.
Potential Hidden Agendas
There is speculation that the urgency expressed in the article might be a strategic move to divert attention from other pressing issues in technology regulation or societal problems. By focusing on the sensational aspects of AI and child safety, there could be an attempt to consolidate governmental power over tech regulation, possibly leading to broader implications for personal freedoms and privacy.
Manipulative Elements
The language used in the article is emotionally charged, aiming to provoke fear and urgency. Phrases such as "extreme real-world consequences", and the portrayal of the technology as overwhelming, could be read as manipulative - an effort to galvanize public pressure on policymakers.
Comparative Context
When compared to other news reports on technology and child safety, this article aligns with a growing narrative around the dangers of AI. The connections with broader concerns about digital privacy and the protection of vulnerable populations highlight a consistent theme in media coverage.
Potential Societal Impact
The implications of this news could ripple through various sectors, including education, technology, and politics. If the government responds decisively, it could lead to stricter regulations on AI development, impacting tech companies and their operations. This could also ignite further discussions about digital ethics and the responsibilities of social media platforms.
Target Audience
This article is likely to resonate more with parents, educators, and child advocacy groups. It seeks to create a coalition of concerned citizens who are motivated by child safety issues, thereby influencing public policy.
Market Reactions
In the context of financial markets, this news might affect tech stocks, particularly those involved in AI and social media. Companies that do not prioritize safety measures in their applications could face backlash, potentially influencing investor confidence.
Geopolitical Relevance
While the article primarily focuses on a national issue, it reflects broader global concerns about technology's role in society. The conversations around AI ethics and child protection are relevant in various countries, suggesting a growing international dialogue on these themes.
AI Influence in Reporting
It is possible that AI tools were employed in crafting the narrative, particularly in structuring the arguments or selecting emotionally impactful phrases. While the core message remains focused on child safety, the nuances introduced by AI could subtly influence the urgency and framing of the narrative.
In conclusion, the piece appears reliable given the credibility of its source and the genuine urgency of the social issue it addresses. Readers should nonetheless recognize its emotional framing and the potential agendas behind its publication.