Deepfakes, Digital Abuse in School on the Rise
- tech360.tv
Educational institutions are contending with a sharp rise in students using artificial intelligence to transform ordinary photographs of their peers into sexually explicit deepfakes. The trend creates a distressing environment for young victims, who must navigate the fallout of manipulated media spreading online. The Associated Press reported that the rise of artificial intelligence has made it easy for anyone to alter or create such images with little to no training or prior experience.

A recent incident at a Louisiana middle school in the US has highlighted the severity of the problem, with fabricated nude images circulated among the student body. Although two boys faced charges for their involvement, one of the victims was expelled after she confronted a student she believed was responsible for creating the images. Legal proceedings for similar offences have also taken place in Florida and Pennsylvania, while students in California have faced expulsion. The issue extends beyond the student population, as seen in Texas, where a teacher was charged with using artificial intelligence to create abusive material involving his students.
The ease with which these images can be produced has increased significantly as the technology has evolved. Previously, the process required specific technical skills to make fabricated content appear realistic, but modern applications and social media tools now allow anyone to produce it with little effort. Data from the National Center for Missing & Exploited Children indicates that the number of AI-generated child abuse images reported to its CyberTipline rose from 4,700 in 2023 to 440,000 during the first half of 2025.
Legislative bodies are attempting to keep pace with these developments. By 2025, at least half of US states had passed laws addressing the use of generative artificial intelligence to create realistic but fabricated images and sounds. Some of these regulations specifically target the production of simulated child sexual abuse material.
The psychological impact of these deepfakes is often more severe than that of traditional bullying because the images can go viral and resurface repeatedly. Victims frequently experience high levels of anxiety and depression, and many find it difficult to prove the images are false because they appear entirely authentic. Experts suggest that schools must update their safeguarding policies to ensure that students understand they cannot act with impunity. Researchers view ignoring the problem as a dangerous approach that leaves children vulnerable to ongoing harm.
Parents are encouraged to speak with their children about the potential risks associated with this technology. Starting with lighthearted conversations about harmless fake videos can provide a gateway to discussing the serious consequences of creating or sharing harmful digital content. Response strategies for families include stopping the spread of images and notifying social media platforms after discussing the incident with a trusted adult. Gathering evidence without downloading the offensive material and providing direct support for victims are also considered vital steps in managing these complex situations.