Politics

Meta Action on Explicit Deepfakes Under Review by Oversight Board




Meta’s Oversight Board will review two cases over how Facebook and Instagram handled content containing artificial intelligence (AI)-generated nude images of two famous women, the board announced Tuesday.

The board is requesting public comment on concerns surrounding AI deepfake pornography as part of its review of the cases.

One case concerns an AI-generated nude image made to look like an American public figure, which Facebook automatically removed because a previous post of the image had been identified as violating Meta’s bullying and harassment policies.

The other case concerns an AI-generated nude image made to look like an Indian public figure, which Instagram did not initially remove after it was reported. The image was later removed after the board selected the case and Meta determined the content was left in “in error,” according to the board.

The board does not name the individuals involved to avoid further harm or risk of gender-based harassment, an Oversight Board spokesperson said.

The board, which is run independently of Meta and funded by a grant provided by the company, can issue a binding decision on content, but policy recommendations are non-binding and Meta has the final say on what it decides to implement.

The board is seeking public comment on strategies Meta could use to address deepfake pornography, as well as on the challenges of relying on automated systems that close reports within 48 hours if no human review takes place.

In the Indian case, the user’s report of the explicit deepfake was automatically closed because it was not reviewed within 48 hours. When the same user appealed the decision, the appeal was also automatically closed and the content remained up. The user then turned to the board.

A Meta spokesperson confirmed that both pieces of content chosen by the board had been removed and said the company “will implement the board’s decision once deliberation has finished.”

Concerns about how explicit deepfakes have spread have been amplified in recent months as AI has become more advanced and widespread.

In January, the spread of AI-generated explicit images of Taylor Swift prompted lawmakers and the White House to push for action to mitigate the spread of deepfake pornography.

Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.



This story originally appeared on thehill.com.
