
White House Presses Crackdown on Abusive AI Sexual Deepfakes

President Joe Biden’s administration is pushing the technology industry and financial institutions to shut down a growing market in abusive sexual images made with artificial intelligence technology.

New generative AI tools have made it easier to turn someone’s image into a sexually explicit AI deepfake and share these realistic images in chat rooms or social media. Victims – whether they are celebrities or children – have little recourse to stop this.

The White House issued an appeal on Thursday seeking voluntary cooperation from companies in the absence of federal legislation. Officials hope that, by committing to a set of specific measures, the private sector can curb the creation, dissemination and monetization of such non-consensual AI images, including explicit images of children.

“When generative AI arrived on the scene, everyone speculated about where the first real damage would come from. And I think we have the answer,” said Biden’s chief science adviser, Arati Prabhakar, director of the White House Office of Science and Technology Policy.

She described to the Associated Press a “phenomenal acceleration” of non-consensual images powered by AI tools and primarily targeting women and girls in a way that could change their lives.

“If you’re a teenager, if you’re a gay kid, these are the issues that people are facing right now,” she said. “We have seen an acceleration because of generative AI that is moving very fast. And the quickest thing that can happen is for companies to step up and take responsibility.”

A document shared with the AP ahead of its release Thursday calls for action not just from AI developers, but also from payment processors, financial institutions, cloud computing providers, search engines and gatekeepers, namely Apple and Google, which control what reaches mobile app stores.

The private sector should step up to “stop the monetization” of image-based sexual abuse by restricting payment access, especially to sites that advertise explicit images of minors, the administration said.

Prabhakar said many payment platforms and financial institutions already say they will not support the types of companies that promote abusive images.

“But sometimes it’s not enforced; sometimes they don’t have those terms of service,” she said. “And this is an example of something that could be done much more rigorously.”

Cloud service providers and mobile app stores could also “restrict web services and mobile applications marketed for the purpose of creating or altering sexual images without individuals’ consent,” the document says.

And whether the image is AI-generated or a real nude photo posted online, survivors should be able to get online platforms to remove it more easily.

The best-known victim of fake pornographic images is Taylor Swift, whose ardent fan base reacted in January when abusive AI-generated images of the singer-songwriter began circulating on social media. Microsoft has promised to strengthen its protections after some of the Swift images were attributed to its AI visual design tool.

Read more: Taylor Swift Deepfakes Highlight Need for New Legal Protections

A growing number of schools in the US and elsewhere are also fighting AI-generated fake nudes depicting their students. In some cases, other teens were found to be creating AI-manipulated images and sharing them with classmates.

Last summer, the Biden administration brokered voluntary commitments from Amazon, Google, Meta, Microsoft and other major tech companies to place a series of safeguards on new AI systems before releasing them publicly.

This was followed by Biden’s signing in October of an ambitious executive order aimed at guiding how AI is developed so that companies can profit without putting public safety at risk. While focused on broader concerns about AI, including national security, it nodded to the emerging problem of AI-generated child sexual abuse imagery and the need for better ways to detect it.

But Biden also said the administration’s AI safeguards would need to be backed by legislation. A bipartisan group of U.S. senators is now pushing Congress to spend at least $32 billion over the next three years to develop artificial intelligence and fund measures to safely guide it, though it has largely deferred calls to enact those safeguards into law.

Encouraging companies to step up and make voluntary commitments “does not change the underlying need for Congress to take action in this case,” said Jennifer Klein, director of the White House Gender Policy Council.

Longstanding laws already criminalize the production and possession of sexual images of children, even if they are false. Federal prosecutors filed charges earlier this month against a Wisconsin man who they say used a popular AI image generator, Stable Diffusion, to create thousands of realistic AI-generated images of minors engaged in sexual conduct. An attorney for the man declined to comment after his arraignment hearing Wednesday.

Read more: No one really knows how AI systems work. A new discovery could change that

But there is almost no oversight over the technological tools and services that make it possible to create such images. Some are on commercial websites that reveal little information about who runs them or the technology they are based on.

The Stanford Internet Observatory said in December that it found thousands of images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that has been used to train top AI image creators, like Stable Diffusion.

London-based Stability AI, which owns the latest versions of Stable Diffusion, said this week that it “has not approved the release” of the earlier model allegedly used by the Wisconsin man. But because their technical components are publicly released on the internet, such open-source models are difficult to put back in the bottle.

Prabhakar said it’s not just open-source AI technology that’s causing harm.

“It’s a broader problem,” she said. “Unfortunately, this is a category that a lot of people seem to be using image generators for. And it’s a place where we just saw a huge explosion. But I think it’s not clearly divided into open-source and proprietary systems.”

——

AP Writer Josh Boak contributed to this report.



This story originally appeared on Time.com.
