FTC says the technology behind audio deepfakes is getting better and better

According to The Verge, the rapid development of voice-cloning technology is making it difficult to distinguish real voices from synthetic speech. A group of experts told an FTC seminar this week that while audio deepfakes, which can trick people into giving up sensitive information, are a growing problem, the technology also has legitimate uses.


“People have been imitating voices for years, but in the last few years the technology has advanced to the point where voices can be cloned at scale from very small audio samples,” said Laura DeMartino, associate director of the FTC’s division of litigation technology and analysis. In its first public workshop on audio cloning, the FTC invited experts from academia, government, medicine, and entertainment to highlight the implications and potential harms of the technology.

FTC spokesperson Juliana Gruenwald Henderson said after the workshop that imposter scams are the most common type of complaint the agency receives. “We began organizing this workshop after learning that machine learning was rapidly improving the quality of voice clones,” she said in an email.

Mona Sedky, of the Justice Department’s computer crime and intellectual property section, says deepfakes, including audio and video, make it easier for criminals to pose as other people. Communication-focused crime has traditionally been less attractive to criminals, Sedky says, because it is difficult and time-consuming to pull off convincingly. “It’s hard to pose convincingly as someone else,” she says. “But with powerful deepfake audio and anonymizing tools, you can communicate anonymously with people all over the world.”

Sedky says cloned audio can be weaponized, just as the internet can be weaponized. “That doesn’t mean we shouldn’t use the internet, but there may be things we can do on the front end, built into the technology, that make it more difficult to weaponize voices.”

John Costello, director of the Augmentative Communication Program at Boston Children’s Hospital, says audio cloning has practical uses for patients who lose their voices. They can “bank” audio samples that can later be used to create a synthetic version of their voice. “A lot of people want to make sure their voices sound real, so after they lose their voices they want to be able to ‘speak’ and still sound like themselves,” he says.

Rebecca Damon, of SAG-AFTRA, the Screen Actors Guild-American Federation of Television and Radio Artists, says audio cloning raises a number of different questions for voice actors and performers, including consent to and compensation for the use of their voices. Voice actors may have contractual obligations governing where their voices can appear, she said, or may not want their voices used in ways that conflict with their beliefs.

She added that for broadcast journalists, the unauthorized misuse or reproduction of their voices could undermine their credibility. “A lot of times people get excited and rush in with new technology, and then don’t necessarily think through all the applications,” says Damon.

While social media and its ability to spread audio and video deepfakes gets much of the attention, most participants at the seminar agreed that the most immediate concern for consumers is audio deepfakes delivered over the phone.

“Social media platforms are the front line for delivering, entrenching, and spreading this content,” said Neil Johnson, an adviser to the Defense Advanced Research Projects Agency (DARPA). Voice-generating text-to-speech applications have a wide range of valuable uses, he said. But Johnson cited the example of a British company that was swindled out of about $220,000 because someone imitated the CEO’s voice as part of a scam.

Patrick Traynor, of the Herbert Wertheim College of Engineering at the University of Florida, says the sophistication of phone scams and audio deepfakes is likely to keep improving. “Ultimately, it will be a combination of technologies that gets us there,” he says, “to combat and detect synthetic or fake voices.” Traynor adds that the best way to determine whether a caller is who they claim to be is a tried-and-true one: hang up and call back. “Unless it’s a state actor who can reroute phone calls, or a very, very sophisticated hacking group, chances are that’s the best way to figure out whether you’re talking to who you think you’re talking to.”