Deepfake porn is destroying real lives in South Korea

TruthLens AI Suggested Headline:

"Deepfake Technology Fuels Rise in Digital Sex Crimes in South Korea"

AI Analysis Average Score: 6.9
These scores (0-10 scale) are generated by TruthLens AI's analysis, assessing the article's objectivity, accuracy, and transparency. Higher scores indicate better alignment with journalistic standards.

TruthLens AI Summary

In South Korea, the emergence of deepfake technology has led to a distressing surge in digital sex crimes, particularly affecting women and students. Ruma, a university student, experienced this firsthand when explicit images of her face were manipulated onto naked bodies and circulated in a Telegram chat room. These images, accompanied by degrading comments and threats of further dissemination, shattered her sense of safety and trust. Despite the long-standing issue of revenge porn, the rise of AI-generated deepfakes poses a new and insidious threat, impacting individuals who have never shared intimate images. Reports indicate that between January and November of the previous year, over 900 individuals in schools reported incidents of deepfake pornography, prompting the South Korean education ministry to form an emergency task force and implement stricter penalties, including prison sentences for offenders. However, the police have struggled to act decisively, with only a fraction of deepfake-related cases resulting in arrests, leading victims like Ruma to take matters into their own hands to seek justice and support.

The societal response to these crimes has been mixed, with many victims facing skepticism and a lack of empathy from the public. Activist Won Eun-ji has worked tirelessly to expose perpetrators and provide assistance to victims, highlighting the need for systemic change in how digital sex crimes are perceived and prosecuted. As victims like Ruma and Kim share their harrowing experiences, they emphasize the importance of accountability for offenders and support for those affected. While recent actions by the South Korean government and platforms like Telegram signal a recognition of the urgency of the issue, many victims feel that true justice remains elusive. The ongoing battle against deepfake pornography underscores a larger societal challenge, as the victims continue to advocate for stronger protections and a cultural shift in attitudes toward digital exploitation and harassment.

TruthLens AI Analysis

The article highlights the alarming rise of deepfake pornography in South Korea and the devastating impact it has on individuals, particularly women. It sheds light on the personal experiences of victims, such as Ruma, and the broader societal issues related to digital sexual crimes. This discussion is timely, given the increasing prevalence of technology that enables such abuses.

Social Impact and Perception

There is a clear intention to raise awareness regarding the dangers of deepfake technology and its implications for personal safety and privacy. The narrative seeks to foster a sense of urgency among readers about the need for stronger protections against digital sexual exploitation. By sharing personal stories of victims, the article aims to evoke empathy and encourage public discourse about consent and privacy in the digital age.

Omissions and Hidden Agendas

While the article focuses on the victims and their experiences, it may gloss over the broader context of digital rights and technological advancements. There may be an underlying intention to push for stricter regulation of technology companies, which could be read as an effort to influence policy decisions. By not addressing potential solutions or the role of tech companies in mitigating these issues, the article may leave readers with a sense of helplessness.

Manipulative Elements

The article employs emotional language to draw attention to the severity of the issue, which may evoke strong reactions from the audience. This technique can be seen as manipulative, as it focuses heavily on personal tragedies without providing a balanced view of the technological landscape or discussing preventative measures. The emphasis on emotional distress over rational discourse may skew public perception and elicit outrage rather than constructive discussion.

Truthfulness of the Content

The reported experiences appear genuine, as they are backed by statistical data from the education ministry regarding the prevalence of deepfake incidents in schools. However, the narrative could benefit from a more comprehensive exploration of the issue, including perspectives from lawmakers, tech experts, and potential solutions.

Projected Consequences

The article could lead to increased public demand for legislative action and greater accountability for tech companies involved in the creation and distribution of deepfake material. This may result in stricter laws regarding digital content and non-consensual sharing, which could alter the landscape of digital rights in South Korea. Additionally, it may raise awareness on a global scale regarding the ramifications of emerging technologies.

Target Audience

This piece likely resonates with feminist groups, digital rights advocates, and those concerned about privacy issues. It appeals to individuals who are passionate about combating sexual violence and advocating for women's rights in the digital sphere.

Economic and Market Implications

While the article doesn’t directly discuss market impacts, it may influence companies involved in technology and social media. As public concern grows around digital privacy and security, companies may face pressure to invest in better safeguards, which could affect their operational costs and stock performance.

Geopolitical Relevance

The issues raised in the article reflect a global challenge regarding privacy, consent, and the impact of technology on society. As digital crimes increase, countries may need to collaborate on international regulations, positioning South Korea as a potential leader in addressing these challenges.

Use of AI in Content Creation

It is possible that AI tools were utilized in the writing process, especially for summarizing statistical data or analyzing public sentiment. However, the human element in storytelling is crucial for conveying emotional experiences, which suggests a balance between AI assistance and human narrative.

In conclusion, the article serves a critical purpose in highlighting the dangers of deepfake pornography and its impact on individuals and society. Its emotional appeal is powerful, though it may risk manipulation by omitting discussions on broader technological and legislative contexts. Such narratives can significantly shape public perception and drive societal change.

Unanalyzed Article Content

Ruma was having lunch on a summer day in 2021 when her phone began blowing up with notifications. When she opened the messages, they were devastating. Photos of her face had been taken from social media and edited onto naked bodies, shared with dozens of users in a chat room on the messaging app Telegram. The comments in screenshots of the chat room were demeaning and vulgar – as were the texts from the anonymous messenger who had sent her the images. “Isn’t it funny? … Watching your own sex video,” they wrote. “Tell me you honestly enjoy this.” The harassment escalated into threats to share the images more widely and taunts that police wouldn’t be able to find the perpetrators. The sender seemed to know her personal details, but she had no way to identify them. “I was bombarded with all these images that I had never imagined in my life,” said Ruma, who CNN is identifying with a pseudonym for her privacy and safety.

While revenge porn – the nonconsensual sharing of sexual images – has been around for nearly as long as the internet, the proliferation of AI tools means that anyone can be targeted by explicit deepfakes, even if they’ve never taken or sent a nude photo. South Korea has had a particularly fraught recent history of digital sex crimes, from hidden cameras in public facilities to Telegram chat rooms where women and girls were coerced and blackmailed into posting demeaning sexual content. But deepfake technology is now posing a new threat, and the crisis is particularly acute in schools.

Between January and early November last year, more than 900 students, teachers and staff in schools reported that they fell victim to deepfake sex crimes, according to data from the country’s education ministry. Those figures do not include universities, which have also seen a spate of deepfake porn attacks. In response, the ministry established an emergency task force.
And in September, legislators passed an amendment that made possessing and viewing deepfake porn punishable by up to three years in prison or a fine of up to 30 million won (over $20,000). Creating and distributing non-consensual deepfake explicit images now carries a maximum prison sentence of seven years, up from five. South Korea’s National Police Agency has urged its officers to “take the lead in completely eradicating deepfake sex crimes.” But of 964 deepfake-related sex crime cases reported from January to October last year, police made 23 arrests, according to a Seoul National Police statement. Legislator Kim Nam-hee told CNN that “investigations and punishments have been too passive so far.” So, some victims, like Ruma, are conducting their own investigations.

Victims taking action

Ruma was a 27-year-old university student when her nightmare first began. When she went to the police, they told her they would request user information from Telegram, but warned the platform was notorious for not sharing such data, she said. Once an outgoing student who enjoyed school and an active social life, Ruma said the incident had completely changed her life. “It broke my whole belief system about the world,” she said. “The fact that they could use such vulgar, rough images to humiliate and violate you to that extreme extent really damages you almost irrevocably.” She decided to act after learning that investigations into reports by other students had ended after a few months, with police citing difficulty in identifying suspects. Ruma and fellow students sought help from Won Eun-ji, an activist who gained national fame for exposing South Korea’s largest digital sex crime group on Telegram in 2020. Won agreed to help, creating a fake Telegram account and posing as a man in his 30s to infiltrate the chat room where the deepfake images had circulated.
She spent nearly two years carefully gathering information and engaging other users in conversation, before coordinating with police to help carry out a sting operation. When police confronted the suspect, Won sent him a Telegram message. His phone pinged – he had been caught.

Two former students from the prestigious Seoul National University (SNU) were arrested last May. The main perpetrator was ultimately sentenced to 9 years in prison for producing and distributing sexually exploitative materials, while an accomplice was sentenced to 3.5 years in prison. Police told CNN further investigations identified at least 61 victims, including 12 current and former SNU students. Seoul National University, in a briefing after the incident, said “the school will strengthen preventative education to raise awareness among the members of the university about digital sex crimes and do its best to protect victims and prevent recurrence.”

Excerpts of the ruling shared by Ruma’s lawyers state, “The fake explicit materials produced by the perpetrator are repugnant, and the conversations surrounding them are shocking … They targeted victims as if they were hunting prey, sexually insulted the victims and destroyed their dignity by using photos from graduations, weddings, and family gatherings.” In response to the verdict, Ruma told CNN, “I didn’t expect the ruling to align exactly with the prosecution’s request. I’m happy, but this is only the first trial. I don’t feel entirely relieved yet.” Ruma’s case is just one of thousands across South Korea – and some victims had less help from police.

Public often unsympathetic

One high school teacher, Kim, told CNN she first learned she was being targeted for exploitation in July 2023, when a student urgently showed her Twitter screenshots of inappropriate photos taken of her in the classroom, focusing on her body. “My hands started to shake,” she recalled.
“When could this photo have been taken, and who would upload such a thing?” CNN is identifying Kim by her last name only for her privacy and safety. But she said the situation worsened two days later: her hair was made messy, and her body was altered to make it look like she was looking back. The manipulated picture of her face was added onto nude photos. The sophisticated technology made the images unnervingly realistic.

Police told her that their only option to identify the poster was to request user information from Twitter, the social media platform bought by Elon Musk in 2022 and rebranded as X in 2023, with an emphasis on free speech and privacy. Kim and a colleague, also a victim of secret filming, feared that using official channels to identify the user would take too long and launched their own investigation. They identified the person: a quiet, introverted student, “someone you’d never imagine doing such a thing,” Kim said. The person was charged, but regardless of what happens in court, she said, life will never be the same.

She said a lack of public empathy has frustrated her too. “I read a lot of articles and comments about deepfakes saying, ‘Why is it a serious crime when it’s not even your real body?’” Kim said. According to X’s current policy, obtaining user information requires a subpoena, court order, or other valid legal document, submitted on law enforcement letterhead via its website. X says it’s company policy to inform users that a request has been made. Its rules on authenticity state that users “may not share inauthentic content on X that may deceive people or lead to harm.”

Pressure on social platforms to act

Won, the activist, said that for a long time, sharing and viewing sexual content of women was not considered a serious offense in South Korea. Though pornography is banned, authorities have long failed to enforce the law or punish offenders, Won said.
Societal apathy makes it easier for perpetrators to commit digital sex crimes, Won said, including what she called “acquaintance humiliation.” “Acquaintance humiliation” often begins with perpetrators sharing photos and personal information of women they know on Telegram, offering to create deepfake content or asking others to do so. Victims live in fear as attackers often know their personal information – where they live, work, and even details about their families – posing real threats to their safety and allowing anonymous users to harass women directly.

Since South Korea’s largest digital sex exploitation case on Telegram in 2020, Won said the sexual exploitation ecosystem had fluctuated, shrinking during large-scale police investigations but expanding again once authorities ease off. The victims CNN interviewed all pushed for heavier punishment for perpetrators. While prevention is important, “there’s a need to judge these cases properly when they occur,” Kim said.

Online platforms are also under pressure to act. Telegram, which has become a fertile space for various digital crimes, announced it would increase sharing user data with authorities as part of a broader crackdown on illegal activities. The move came after the company’s CEO Pavel Durov was arrested in August in France on a warrant relating to Telegram’s lack of moderation, marking a turning point for a platform long recognized for its commitment to privacy and encrypted messaging. Durov is under formal investigation but has been allowed to leave France, he said in a post on Telegram. Last September, South Korea’s media regulator said Telegram had agreed to establish a hotline to help wipe illegal content from the app, and that the company had removed 148 digital sex crime videos as requested by the regulator.
Won welcomed this move, but with some skepticism – saying governments should remove the app from app stores, to prevent new users from signing up, if Telegram doesn’t show substantial progress soon. “This is something that has been delayed for far too long,” she said. In a statement to CNN, Telegram said the company “has a zero-tolerance policy for illegal pornography” and uses “a combination of human moderation, AI and machine learning tools and reports from users and trusted organizations to combat illegal pornography and other abuses of the platform.”

A meaningful breakthrough occurred this January, marking the first time Korean authorities successfully obtained crime-related data from Telegram, according to Seoul police. Fourteen people were arrested, including six minors, for allegedly sexually exploiting over 200 victims through Telegram. The criminal ring’s mastermind had allegedly targeted men and women of various ages since 2020, and more than 70 others were under investigation for allegedly creating and sharing deepfake exploitation materials, Seoul police said.

Meanwhile, victims told CNN they hope other women in their position can receive more support from the police and the courts going forward. “No matter how much punishments are strengthened, there are still far more victims who suffer because their perpetrators have not been caught, and that’s why it feels like the verdict is still far from being a true realization of change or justice,” Ruma said. “There’s a long way to go.”

Source: CNN