1 in every 10 minors uses AI to generate nude images of their classmates & shares them online, finds survey


The survey’s findings serve as a stark reminder of the growing dangers posed by generative AI technologies in the hands of minors. In several cases, particularly in the US, students used AI to create inappropriate images of their teachers and classmates.


Image credit: Pexels

A new survey has brought to light a troubling trend among minors, revealing that one in every 10 children is involved in using AI technology to generate non-consensual nude images of their classmates.

The findings, released by Thorn, a non-profit organization focused on protecting children from sexual exploitation, underscore the growing misuse of AI tools among young people, particularly within schools.

A Rising Concern
The survey, conducted online between November 3 and December 1, 2023, included 1,040 minors aged 9 to 17. These participants, from diverse backgrounds, were questioned about their experiences with child sexual abuse material (CSAM) and other harmful online activities. The results paint a concerning picture of how AI technologies, particularly “nudify” apps, are being misused by children to create fake nude images of their peers.

These findings have sparked significant alarm among parents, educators, and child protection advocates, as they highlight the ease with which minors can access and use these AI tools for malicious purposes.

The survey also revealed that one in seven minors admitted to sharing self-generated CSAM, reflecting a broader trend of risky online behaviour among young people. Although some of these actions might be seen as adolescent misbehaviour, the serious implications for the victims cannot be overlooked.

Study Under Fire
Thorn, the organization behind the survey, has faced its share of controversy. The non-profit has been scrutinized for its past work in developing tools for law enforcement, which some privacy experts have criticized. Additionally, the organization’s founder, Ashton Kutcher, stepped down last year following backlash for supporting a convicted rapist.

Despite these controversies, Thorn continues to work with major tech companies like Google, Meta, and Microsoft, aiming to combat AI-generated child sexual abuse material (AIG-CSAM). However, the persistence of harmful AI-generated content on these platforms has raised questions about the effectiveness of these partnerships.

Against AI-Driven Harm
The survey’s findings serve as a stark reminder of the growing dangers posed by generative AI technologies in the hands of minors. Recent incidents, such as investigations in Washington State and Florida where students used AI to create inappropriate images of their teachers and classmates, highlight the real-world consequences of this digital abuse.

As the report concludes, the need for proactive measures to address these risks is clear. While technology plays a significant role in facilitating these harmful behaviours, the underlying issue lies in the behaviours themselves. The survey calls for open discussions about the dangers of “deepfake nudes” and the establishment of clear boundaries regarding acceptable behaviour in schools and communities, irrespective of the tools being used.

The survey underscores the importance of educating both minors and adults about the potential harms of AI misuse, emphasizing that the consequences for victims are serious and far-reaching. The findings challenge society to take decisive action to curb these dangerous trends before they escalate further.
