From fake nudes to incriminating audio, school bullying is going AI

Students are quickly learning how easily AI can create harmful content, opening up a new world of bullying that neither schools nor the law are fully prepared for.

Educators watch in horror as false sexual images of their students are created, with fake voice recordings and videos also posing an imminent threat.

Advocates are sounding the alarm about the potential harm — and about gaps in both the law and school policies.

“We need to follow up and we have a responsibility, as people who support educators and support parents and families, as well as students themselves, to help them understand the complexity of dealing with these situations, so that they understand the context, they can learn to empathize and make ethical decisions about the use and application of these AI systems and tools,” said Pati Ruiz, senior director of educational technology and emerging technology at Digital Promise.

At Westfield High School in New Jersey last year, teenage boys used AI to create sexually explicit images of female classmates.

“In this situation, there were some boys or a boy — that is to be determined — who created, without the girls’ consent, inappropriate images,” Dorota Mani, the mother of one of the girls at the school, told CNN at the time.

And in Pennsylvania, a mother allegedly created AI images of her daughter’s rival cheerleaders naked and drinking at a party before sending them to the coach, the BBC reported.

“The challenge at the time was that the district had to suspend [a student] because they really thought it was her – that she was naked and apparently smoking marijuana,” said Claudio Cerullo, founder of TeachAntiBullying.org.

Schools are struggling to respond to these cruel new uses of AI: the Pennsylvania district was unable to determine on its own that the images were fake and had to involve the police.

Even experts in the field are just beginning to understand the destructive power of AI in the schoolyard or locker room.

Cerullo is also part of Vice President Harris’ task force on cyberbullying and harassment, and said the group has been discussing the increased risk of suicide in teens due to cyberbullying and what policies need to be developed.

They are “reviewing procedures, working with local and state authorities when it comes to identifying AI standards and needs,” Cerullo said.

As it stands, even law enforcement has yet to find a clear path forward.

The Federal Trade Commission (FTC) proposed new protections against AI fakes in February, seeking to completely ban deepfakes, while the Department of Justice created an “artificial intelligence officer” position to better understand the new technology.

Last month, a bipartisan group of lawmakers released a report, endorsed by Senate Majority Leader Chuck Schumer (D-N.Y.), on what Congress needs to address regarding AI, including deepfakes.

“This is sexual violence,” Rep. Alexandria Ocasio-Cortez (D-N.Y.) said in a video last week promoting legislation to combat fake pornography.

“And what’s even crazier is that right now there are no federal protections for anyone, regardless of your gender, if you are a victim of non-consensual deepfake pornography,” added Ocasio-Cortez, who said she has personally been a victim of such deepfakes.

The bipartisan DEFIANCE Act she endorsed was introduced in both chambers in March. It would create a federal civil right of action for victims of nonconsensual AI pornography, allowing them to seek redress in court.

And while there have been discussions about what legal repercussions should face those who use AI to harm others, the issue becomes even thornier when minors are involved.

“The way we approach this may need to go beyond enforcement, because it may not be palatable for us to say, well, you know, we’re going to ruin a bunch of these kids’ lives for what could really just be making a stupid mistake and experimenting,” said Alex Kotran, co-founder and CEO of the AI Education Project.

“If the barrier to creating and distributing child pornography is lowered from meeting a minor, taking photos of that person, uploading those photos and sharing them on your network, to just typing a single text prompt or uploading a single image, it starts to feel like something children can do without realizing the enormity and gravity of what they are doing. And that’s why I think that, in addition to laws, we have to be able to establish clear norms as a society,” said Kotran.

And in trying to ensure that students don’t use AI for hostile purposes, concerns have arisen about excessive surveillance.

“I think we’re here really thinking about ethics and emphasizing the need to be responsible with the use of these technologies and also protecting student and community data and privacy while also managing the risks of cyberbullying that emerge with this new technology,” Ruiz said.

While deepfake images have grabbed the most headlines, they are far from the only risk AI poses to schools. An athletic director at Pikesville High School in Maryland created a fake AI voice recording of a principal to try to paint him as a racist and get him fired.

And deepfake videos will only become more accessible and realistic.

“I think treating this as a problem specific to child pornography or deepfake nudes is actually missing the forest for the trees. I think these are the issues where deepfakes are really peaking — they’re the most visceral — but I think the biggest challenge is how we can build the next iteration of digital literacy and digital citizenship with a generation of students who will have these really powerful tools at their disposal,” said Kotran.

He raised concerns that the technology could reach a point where students would be afraid to share any photos of themselves online for fear it would be used for deepfakes.

“We have to try to get ahead of this challenge because I think it really takes a toll on children’s mental wellbeing, and I see very few organizations or people who are really focused on this. And I just worry that this is no longer a future state, but rather a clear and present danger that needs to be addressed now,” Kotran said.



This story originally appeared on thehill.com.
