Pennsylvania Private School Rocked by Deepfake Porn Scandal Targeting Girls


A private school in Lancaster, Pennsylvania, has been forced to cancel classes following an AI-generated nude photo scandal targeting nearly 50 female students.

Ars Technica reports that Lancaster Country Day School, a private institution serving approximately 600 students from pre-kindergarten through high school, has been rocked by a scandal involving AI-generated explicit images of nearly 50 female students. The school’s head, Matt Micciche, allegedly learned of the issue in November 2023 through an anonymous report submitted via a state-run portal called “Safe2Say Something.” However, he allegedly failed to act, allowing more students to be targeted for months until police were notified in mid-2024.

The student accused of creating the harmful content was arrested in August, and their phone was seized as part of the ongoing investigation. Parents, outraged by the school’s failure to uphold mandatory reporting responsibilities, filed a court summons threatening to sue unless the responsible school leaders resigned within 48 hours.

As a result of the parents’ action, Micciche and the school board’s president, Angela Ang-Alhadeff, “parted ways” with the school, effective late Friday. Despite their resignations, parents seem determined to pursue the lawsuit, as the school leaders failed to meet the initial deadline. Classes were cancelled Monday and the future of the school remains unclear.

The scandal has had a significant impact on the school community, with more than half of the students staging a walkout last week, forcing the cancellation of classes. Students and some faculty members called for resignations and additional changes from the remaining leadership.

The incident highlights the growing concern over the proliferation of AI-generated explicit content and its potential to harm minors. Lawmakers are grappling with the issue, attempting to determine whether existing laws protecting children against abuse are sufficient to shield them from AI-related harms. Some proposed legislation seeks to criminalize the creation and sharing of harmful AI-generated content, with penalties that could include substantial fines and imprisonment.

However, progress on these proposed laws has been slow, and the United States appears to be lagging behind other countries in addressing this emerging threat. South Korea, for example, has taken a more proactive approach, launching a sustained crackdown on harmful AI-generated content and introducing tougher penalties for those involved in its production and distribution.

Read more at Ars Technica here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
