Commissioner calls for ban on apps that make deepfake nude images of children

TruthLens AI Suggested Headline:

"Children's Commissioner Urges Ban on Deepfake Apps Generating Sexual Images of Minors"

AI Analysis Average Score: 7.7
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In a recent report, the children's commissioner for England, Dame Rachel de Souza, has raised urgent concerns regarding the proliferation of artificial intelligence apps that generate deepfake sexual images of children. These so-called 'nudification' apps have sparked significant fear among teenage girls, leading many to refrain from sharing images on social media due to anxiety that their likeness could be manipulated to create explicit content. While the creation and distribution of sexually explicit images of minors are illegal, the technology facilitating such manipulations remains legally permissible. De Souza emphasized that children have expressed their fears regarding the potential misuse of these apps by peers or strangers, which could lead to severe emotional and psychological consequences. She is calling for an immediate ban on these applications, arguing that they have no constructive place in society and pose a serious threat to the safety of young individuals, particularly girls.

The report also highlights the broader societal implications of deepfake technology, including its contribution to a culture of misogyny and the psychological toll it can exert on victims. It cites alarming statistics, such as 26% of surveyed girls aged 13 to 18 having encountered sexually explicit deepfakes of various individuals, including themselves. De Souza advocates for the establishment of an AI bill that would compel developers to take responsibility for the risks associated with their products, as well as enforce age verification measures on these apps. Legal experts have noted a rise in cases involving minors engaging with deepfake technology, often without understanding the ramifications of their actions. This evolving landscape presents a challenge for law enforcement and parents alike, as it complicates the legal definitions of crimes committed by young people who may not fully grasp the consequences of their behavior. The government has acknowledged the issue and is working on measures to combat the creation and distribution of AI-generated child sexual abuse material, marking a significant step in addressing the threats posed by these technologies.

TruthLens AI Analysis

The article addresses a pressing issue regarding the safety of children in the digital age, focusing on the alarming use of artificial intelligence to create deepfake nude images of minors. It highlights the concerns of teenage girls who feel threatened by the potential misuse of technology that could exploit their images, thereby raising alarms about the adequacy of current legal frameworks in protecting them.

Societal Concerns and Psychological Impact

The article illustrates the psychological toll that these apps can have on young girls, leading them to refrain from sharing their images on social media. This reflects a broader societal concern about the impact of rapidly evolving technology on youth mental health and privacy. The children's commissioner emphasizes that the existence of such apps fosters fear and anxiety among children, which can lead to detrimental effects on their self-esteem and social interactions.

Legal and Regulatory Gaps

While it is illegal to create or distribute sexually explicit images of children, the technology that facilitates such actions remains unregulated. This highlights a significant gap in legislation that may inadvertently allow harmful practices to proliferate. The call for an AI bill and stricter regulations on generative AI tools aims to address these existing loopholes, suggesting a need for a proactive approach in safeguarding children online.

Public Awareness and Government Action

By urging the government to take decisive action against these apps, the article seeks to raise public awareness about the urgent need for reforms in the legislative landscape. This appeal to the government is aimed at mobilizing community support and encouraging a collective response to protect children from digital exploitation.

Manipulative Elements and Trustworthiness

The language used in the article aims to elicit a strong emotional response from readers, invoking concern for children's safety. While the article presents legitimate concerns, the framing of the issue could be seen as manipulative, particularly if it stokes fear without offering a balanced view of technological advancements. The overall trustworthiness of the article is supported by the credibility of the children’s commissioner and the documented experiences of young girls, though the emotive language raises questions about potential bias.

Connections to Broader Issues

This news piece can be linked to wider discussions about digital safety, child protection laws, and the ethical implications of AI technologies. It resonates with ongoing debates about the regulation of technology in society and the responsibilities of developers in creating safe applications.

Potential Societal and Economic Impact

If the government takes action based on these recommendations, it could lead to significant changes in the tech industry, compelling developers to prioritize child safety in their products. This may also spark broader conversations about the role of AI in society, influencing public policy and potentially affecting stock prices of tech companies involved in AI development.

Community Support and Target Audience

The article likely garners support from child advocacy groups, parents, and educators who are concerned about the safeguarding of children online. It appeals to those who prioritize children's rights and well-being in the face of technological advancements.

Market Considerations

In terms of market impact, companies involved in generative AI and social media may face scrutiny and potential regulatory changes. This news could affect investor sentiment toward these sectors, as increased regulation may influence profitability and operational practices.

Global Context and Relevance

From a global perspective, this issue aligns with current discussions about digital rights and the ethical use of AI technologies. The conversation is relevant today as societies grapple with the implications of rapid technological advancements on personal privacy and safety.

Use of AI in Journalism

It is plausible that AI tools may have aided in the drafting or editing of this article, particularly in analyzing data or generating insights based on existing reports. However, the emotional tone and advocacy stance strongly suggest human authorship aimed at raising awareness and prompting action.

In summary, the article serves as a call to action regarding the regulation of harmful technologies and reflects broader societal concerns about the safety of children in an increasingly digital world. It is a reliable source of information, given its grounding in real experiences and expert commentary, while also raising important questions about the balance between technological innovation and ethical responsibility.

Unanalyzed Article Content

Artificial intelligence “nudification” apps that create deepfake sexual images of children should be immediately banned, amid growing fears among teenage girls that they could fall victim, the children’s commissioner for England is warning.

Girls said they were stopping posting images of themselves on social media out of a fear that generative AI tools could be used to digitally remove their clothes or sexualise them, according to the commissioner’s report on the tools, drawing on children’s experiences. Although it is illegal to create or share a sexually explicit image of a child, the technology enabling them remains legal, the report noted.

“Children have told me they are frightened by the very idea of this technology even being available, let alone used. They fear that anyone – a stranger, a classmate, or even a friend – could use a smartphone as a way of manipulating them by creating a naked image using these bespoke apps,” the commissioner, Dame Rachel de Souza, said.

“The online world is revolutionary and quickly evolving, but there is no positive reason for these particular apps to exist. They have no place in our society. Tools using deepfake technology to create naked images of children should not be legal and I’m calling on the government to take decisive action to ban them, instead of allowing them to go unchecked with extreme real-world consequences.”

De Souza urged the government to introduce an AI bill that would require developers of GenAI tools to address the risks their products pose, and to roll out effective systems to remove sexually explicit deepfake images of children. This should be underpinned by policymaking that recognises deepfake sexual abuse as a form of violence against women and girls, she suggested.

In the meantime, the report urges Ofcom to ensure that age verification on nudification apps is properly enforced and that social media platforms prevent sexually explicit deepfake tools being promoted to children, in line with the Online Safety Act.

The report cited a 2025 survey by Girlguiding, which found that 26% of respondents aged 13 to 18 had seen a sexually explicit deepfake image of a celebrity, a friend, a teacher, or themselves.

Many AI tools appear to only work on female bodies, which the report warned is fuelling a growing culture of misogyny.

One 18-year-old girl told the commissioner: “The narrative of Andrew Tate and influencers like that … backed by a quite violent and becoming more influential porn industry is making it seem that AI is something that you can use so that you can always pressure people into going out with you or doing sexual acts with you.”

The report noted that there is a link between deepfake abuse and suicidal ideation and PTSD, for example in the case of Mia Janin, who died by suicide in March 2021.

De Souza wrote in the report that the new technology “confronts children with concepts they cannot yet understand”, and is changing “at such scale and speed that it can be overwhelming to try and get a grip on the danger they present”.

Lawyers told the Guardian that they were seeing this reflected in an increase in cases of teenage boys getting arrested for sexual offences because they did not understand the consequences of what they were doing, for example experimenting with deepfakes, being in a WhatsApp chat where explicit images are circulating, or looking up porn featuring children their own age.

Danielle Reece-Greenhalgh, a partner at the law firm Corker Binning who specialises in sexual offences and possession of indecent images, said the law was “trying to keep up with the explosion in accessible deepfake technology”, which was already posing “a huge problem for law enforcement trying to identify and protect victims of abuse”.

She noted that app bans were “likely to stir up debate around internet freedoms”, and could have a “disproportionate impact on young men” who were playing around with AI software unaware of the consequences.

Reece-Greenhalgh said that although the criminal justice system tried to take a “commonsense view and avoid criminalising young people for crimes that resemble normal teenage behaviour … that might previously have happened behind a bike shed”, arrests could be traumatic experiences and have consequences at school or in the community, as well as longer-term repercussions such as needing to be declared on an ESTA form to enter the US or showing up on an enhanced DBS check.

Matt Hardcastle, a partner at Kingsley Napley, said there was a “minefield for young people online” around accessing unlawful sexual and violent content. He said many parents were unaware how easy it was for children to “access things that take them into a dark place quickly”, for example nudification apps.

“They’re looking at it through the eyes of a child. They’re not able to see that what they’re doing is potentially illegal, as well as quite harmful to you and other people as well,” he said. “Children’s brains are still developing. They have a completely different approach to risk-taking.”

Marcus Johnstone, a criminal solicitor specialising in sexual offences, said he was working with an “ever-increasing number of young people” who were drawn into these crimes. “Often parents had no idea what was going on. They’re usually young men, very rarely young females, locked away in their bedrooms and their parents think they’re gaming,” he said. “These offences didn’t exist before the internet, now most sex crimes are committed online. It’s created a forum for children to become criminals.”

A government spokesperson said: “Creating, possessing or distributing child sexual abuse material, including AI-generated images, is abhorrent and illegal. Under the Online Safety Act platforms of all sizes now have to remove this kind of content, or they could face significant fines.

“The UK is the first country in the world to introduce further AI child sexual abuse offences, making it illegal to possess, create or distribute AI tools designed to generate heinous child sexual abuse material.”

In the UK, the NSPCC offers support to children on 0800 1111, and adults concerned about a child on 0808 800 5000. The National Association for People Abused in Childhood (Napac) offers support for adult survivors on 0808 801 0331. In the US, call or text the Childhelp abuse hotline on 800-422-4453. In Australia, children, young adults, parents and teachers can contact the Kids Helpline on 1800 55 1800, or Bravehearts on 1800 272 831, and adult survivors can contact Blue Knot Foundation on 1300 657 380. Other sources of help can be found at Child Helplines International.

Source: The Guardian