Microsoft is asking members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse and manipulation. Microsoft Vice Chair and President Brad Smith is calling for urgent action from lawmakers to secure elections, protect seniors from fraud and protect children from abuse.
“While the technology sector and nonprofit groups have taken recent steps to address this issue, it has become clear that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the U.S. can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from ordinary Americans.”
Microsoft wants a “deepfake fraud statute” that will give law enforcement a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws regarding child sexual exploitation and abuse and nonconsensual intimate images are updated to include AI-generated content.”
Microsoft had to implement additional safety controls for its own AI products after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities, including Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.
While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easier to create fake audio, images and videos – something we are already seeing in the lead-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week in a post that appears to violate X’s own policies against synthetic and manipulated media.
Microsoft wants posts like Musk’s to be clearly labeled as deepfakes. “Congress should require vendors of AI systems to use state-of-the-art provenance tools to label synthetic content,” says Smith. “This is essential to building trust in the information ecosystem and will help the public better understand whether content is generated or manipulated by AI.”