
Microsoft wants Congress to ban AI-generated deepfake scams



Microsoft is asking members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse and manipulation. Microsoft Vice Chair and President Brad Smith is calling for urgent action from lawmakers to secure elections, protect seniors from fraud and shield children from abuse.

“While the technology sector and nonprofit groups have taken recent steps to address this issue, it has become clear that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from ordinary Americans.”

Microsoft wants a “deepfake fraud statute” that will give law enforcement a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws regarding child sexual exploitation and abuse and nonconsensual intimate images are updated to include AI-generated content.”

Microsoft had to implement more security controls for its own AI products after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.

While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easier to create fake audio, images and videos – something we are already seeing in the lead-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week in a post that appears to violate X’s own policies against synthetic and manipulated media.

Microsoft wants posts like Musk’s to be clearly labeled as deepfakes. “Congress should require vendors of AI systems to use state-of-the-art provenance tools to label synthetic content,” says Smith. “This is essential to building trust in the information ecosystem and will help the public better understand whether content is generated or manipulated by AI.”


