The Internet reacted strongly to an artificial intelligence-generated video of the woman from Leonardo da Vinci's famous Mona Lisa painting singing along to a rap that actress Anne Hathaway wrote and performed.
The polarizing clip, which has sparked online reactions ranging from humor to horror, is one of the demonstrations of Microsoft's new AI technology called VASA-1. The technology is capable of generating realistic talking faces for virtual characters using a single image and an audio clip of speech. The AI can make characters from cartoons, photographs and paintings sing or talk, as shown in video footage Microsoft released as part of a research paper published on April 16.
In the most viral clip, the woman from the Mona Lisa painting sings, her mouth, eyes and face moving, to the sound of "Paparazzi", a rap that Hathaway wrote and performed on Conan O'Brien's talk show in 2011. In another Microsoft clip, an avatar sings, and in others generated from real photos, people talk about everyday topics.
The videos quickly gained traction online: a post on X, formerly Twitter, on April 18 featuring the Mona Lisa clip and others had received seven million views as of Sunday.
Microsoft just dropped VASA-1.
This AI can make a single image sing and speak expressively from audio reference. Similar to Alibaba’s EMO
10 wild examples:
1. Mona Lisa rapping Paparazzi pic.twitter.com/LSGF3mMVnD
-Min Choi (@minchoi) April 18, 2024
Online reactions were swift, strong and widespread. Some liked the clips, with one commenter posting that the Mona Lisa video made them "roll on the floor laughing." Others were more cautious or even disturbed. "This is wild, weird and scary all at the same time," one said. "Another day, another terrifying AI video," another lamented. "Why does this need to exist? I can't think of any positives," one critic said.
Microsoft researchers addressed the risks of the new technology and said they have no plans to release an online demo or product “until we are confident that the technology will be used responsibly and in accordance with appropriate regulations.”
"It is not intended to create content that is used to mislead or deceive," the researchers wrote. "However, like other related content generation techniques, it can still be potentially misused to impersonate humans. We oppose any behavior to create misleading or harmful content of real persons, and are interested in applying our technique to advance forgery detection."
“While recognizing the possibility of misuse, it is imperative to recognize the substantial positive potential of our technique,” they said. “The benefits – such as increasing educational equity, improving accessibility for individuals with communication difficulties, offering companionship or therapeutic support to those in need, among many others – underscore the importance of our research and other related explorations. We are committed to developing AI responsibly, with the aim of promoting human well-being.”
The latest development in AI comes at a time when governments around the world are struggling to regulate the new technology and legislate against its criminal use.
One example is deepfake pornography, in which an individual’s face is superimposed onto an explicit image or video without their consent, a problem that even affected Taylor Swift earlier this year. In the US, although 10 states criminalize deepfakes, federal law does not, and several bills have been introduced in Congress to correct this.
This story originally appeared on Time.com.