Deepfakes, the “big cancer” of the AI field, is drawing more and more industry backlash. On January 6, 2020, Facebook’s vice president of global policy, Monika Bickert, said in a blog post that Facebook would remove Deepfake videos that met certain criteria. It is Facebook’s latest move against “manipulated media” on its platform.
Monika Bickert says:
While Deepfake videos are still rare on the Internet today, their growing numbers will pose a major challenge to the industry and to society.
What is Deepfake?
You may have heard the name Deepfake before.
At the end of 2017, a user of the US social news site Reddit going by “Deepfakes” used artificial intelligence to create a fake video that caused a stir: the face of Wonder Woman actress Gal Gadot had been grafted onto an adult film actress.
In fact, face swapping is quite common in computer vision, and when it comes to AI face-swapping methods, one topic that cannot be avoided is Cycle GAN, an important early attempt at face conversion. Built on the deep learning model GAN (Generative Adversarial Network), Cycle GAN makes it easy to learn the mapping between two image categories and is therefore naturally suited to “image-to-image translation.”
However, while Cycle GAN guarantees the overall semantic transformation, it does not guarantee the details. Its two main weaknesses are that expressions cannot be matched one-to-one, and that the output often contains blurry, meaningless pixels.
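The “cycle” in Cycle GAN refers to its cycle-consistency constraint: one translator G maps domain A to B, a second translator F maps B back to A, and the model is penalized whenever F(G(x)) fails to reproduce the original input. The NumPy sketch below illustrates only this loss term, using toy linear “generators” (the names and matrices here are hypothetical stand-ins, not taken from any real Cycle GAN implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "generators": real Cycle GANs use deep convolutional nets.
G = np.eye(4) + 0.1 * rng.normal(size=(4, 4))  # translate domain A -> B
F = np.linalg.inv(G)                           # translate B -> A (exact inverse here)

def cycle_consistency_loss(x_a: np.ndarray) -> float:
    """Mean L1 distance between x_a and F(G(x_a)): the round trip
    A -> B -> A should land back on the original, preserving content."""
    x_b = x_a @ G        # forward translation
    x_a_rec = x_b @ F    # backward translation
    return float(np.abs(x_a_rec - x_a).mean())

batch = rng.normal(size=(8, 4))       # a batch of 8 toy "images" from domain A
loss = cycle_consistency_loss(batch)  # ~0, since F exactly inverts G
```

During training, this loss is added to the usual adversarial losses of both generators; it is what lets Cycle GAN learn a mapping between two unpaired sets of images.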
Unlike Cycle GAN, Deepfake is a deep autoencoder (encoder-decoder) model. Using at least several hundred photos each of the source and the target person, it trains the network to recognize and reconstruct each face individually. Finally, the source person’s photo is paired with the target person’s decoder to complete the face swap.
Of course, Deepfake has limitations. It does not work on small samples (you cannot swap two faces with only one or two photos), and the training process can be resource-intensive.
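The architecture described above can be sketched as follows: a shared encoder compresses any face into a latent code, each person gets a decoder trained to reconstruct their own face from that code, and the swap simply routes the source’s code through the target’s decoder. This is a minimal NumPy illustration of the data flow only; all weights are random and untrained, and the names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

FACE_DIM, LATENT_DIM = 16, 4

# Shared encoder: learns pose/expression features common to both people.
W_enc = 0.5 * rng.normal(size=(FACE_DIM, LATENT_DIM))
# One decoder per identity: each learns to "paint" only its own person's face.
W_dec_source = 0.5 * rng.normal(size=(LATENT_DIM, FACE_DIM))
W_dec_target = 0.5 * rng.normal(size=(LATENT_DIM, FACE_DIM))

def encode(face: np.ndarray) -> np.ndarray:
    return np.tanh(face @ W_enc)

def decode(latent: np.ndarray, W_dec: np.ndarray) -> np.ndarray:
    return latent @ W_dec

def face_swap(source_face: np.ndarray) -> np.ndarray:
    # The trick: encode the SOURCE face, then decode with the TARGET decoder,
    # yielding the target's appearance with the source's pose and expression.
    return decode(encode(source_face), W_dec_target)

source_face = rng.normal(size=(FACE_DIM,))
swapped = face_swap(source_face)  # same shape as the input face
```

During training, each decoder only ever sees its own person’s photos, which is why several hundred images per person are needed before the swap looks convincing.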
Reddit eventually banned the account because the face-swap videos Deepfakes produced and uploaded involved pornography and violated others’ privacy; in retaliation, Deepfakes released the AI face-swapping code to the public for free.
Once Pandora’s box was opened, Deepfake developed at a stunning pace, its face-swapping effects grew ever more realistic, and its negative impact on society deepened. Deeptrace, a Dutch cybersecurity start-up, released a report on the state of Deepfakes in October 2019, noting that 96% of the fake videos on the web involve pornography, that all of the victims in Deepfake videos circulating on major sites are women, and that entertainment celebrities have become the main targets.
In addition, Deepfake’s impact on politics is not to be underestimated, two of the most typical examples are:
In 2018, a Deepfake-synthesized video of Gabonese President Ali Bongo’s New Year’s address helped trigger an attempted military coup;
In 2019, a video of Malaysian Economic Affairs Minister Azmin Ali having sex with a man stirred up Malaysian politics (same-sex relations are illegal in Malaysia; Azmin Ali claims the video was forged with Deepfake technology as part of a political conspiracy).
As Giorgio Patrini, founder and chief scientific officer of Deeptrace, puts it:
Whether it’s video, audio, or text, multiple digital communication channels we’ve relied on and trusted have been disrupted.
Joint efforts from government, industry, and academia
The spread of Deepfake videos has naturally caused alarm and reflection. As a result, major technology companies have begun joining forces with academia to take action and head off deeper harm.
As Professor Siwei Lyu of the College of Engineering and Applied Sciences at the University at Albany, State University of New York, put it:
Addressing this issue requires the joint efforts of technicians, government agencies, the media, companies, and every online user.
On September 5, 2019, Facebook CTO Mike Schroepfer announced that Facebook was working with Microsoft and a number of universities, including MIT, Oxford, Cornell, and UC Berkeley, on ways to detect Deepfakes. The non-profit research organization Partnership on AI, whose members include large technology companies such as Google, Apple, Amazon, and IBM, is also involved.
In addition, Facebook invested $10 million to launch the Deepfake Detection Challenge, which officially kicked off at NeurIPS 2019 in Vancouver in late 2019 and runs until spring 2020, encouraging researchers and hobbyists to find solutions.
That same month, Google AI open-sourced a Deepfake video detection dataset containing more than 3,000 manipulated videos, filmed with paid actors across 28 different scenes over the course of a year.
In early October 2019, Mark Warner and Marco Rubio of the U.S. Senate Intelligence Committee called on the heads of 11 technology companies, including Facebook, Twitter, YouTube, Reddit, and LinkedIn, to accelerate industry standards and rules for sharing, removing, and archiving AI-synthesized content. At the end of the month, Twitter released a draft of its proposed policy and publicly solicited user feedback.
As mentioned at the beginning of this article, Facebook recently published a blog that unveiled a new policy for Deepfake videos.
According to the blog, Facebook will remove manipulated media that meets the following criteria:
Heavily edited or synthesized videos that mislead viewers into thinking the people in them said things they did not actually say;
Videos in which content has been merged, replaced, or superimposed using AI or machine learning.
It is worth noting that videos that do not meet these criteria are still reviewed by third-party fact-checkers (Facebook’s more than 50 partners worldwide, covering more than 40 languages). If a video is rated false or partly false, Facebook significantly reduces its distribution in the News Feed, and users are warned when they share it.
At the same time, Facebook points out that the new policy does not apply to parody or satire, nor to videos that have merely been edited to cut words or change their order.
Facebook’s new policy is controversial
Paul Scharre, director of the Technology and National Security Program at the Center for a New American Security (CNAS), a US think tank, has warned:
Deepfake is so destructive that it’s only a matter of time before it’s used to manipulate elections.
So with the 2020 U.S. presidential election approaching, this move by Facebook, the world’s largest social media platform, naturally carries political considerations.
It may be no coincidence that Monika Bickert’s blog post comes ahead of her upcoming congressional hearing on online manipulation and fraud. With just 10 months to go until the 2020 election, few in political circles are satisfied with Facebook’s role in the democratic process, and the announcement looks like Facebook proving its capabilities to the authorities.
Speaking of politics, recall that in 2019 a video of a speech by Nancy Pelosi, the Democratic Speaker of the U.S. House of Representatives, was maliciously edited (slowed to 0.75x speed, with her pitch raised to mask the change) to make her appear drunk. The video quickly spread across social networks and did serious damage to her image.
In response, Facebook said it would not delete the video, since it was an ordinary edit and did not violate any platform rules; because of its negative impact, however, Facebook reduced the video’s distribution on the platform.
That decision drew widespread criticism, and Facebook’s new policy still offers no convincing strategy for fake content like last year’s Nancy Pelosi video. The main reasons it remains controversial are the following.
First, the first criterion applies only to videos that “make viewers think the person in the video said something they didn’t actually say,” which means it does not cover Deepfake videos that make viewers think the person did something they didn’t actually do.
In fact, this is a big loophole. Under the policy, Facebook would not delete a Deepfake video even if it showed a politician burning an American flag, attending a white nationalist rally, or shaking hands with a terrorist. These may be “extreme situations,” but they are precisely the ones that matter most for foreign policy, national security, and electoral integrity. Embarrassingly, a Facebook spokesman confirmed to OneZero, a US subscription media outlet, that the platform would indeed not delete such videos.
Second, the second criterion has limited effect in curbing the spread of disinformation, because at this stage the vast majority of such videos are still produced with ordinary video-editing software: by manually cutting context or reordering words, editors can take remarks “out of context” just as effectively. As Paul Barrett, deputy director of the Stern Center for Business and Human Rights at New York University and an expert on political disinformation, puts it:
Under Facebook’s new policy, a fake video escapes deletion simply because the method of tampering is not advanced enough.
Third, the “fact-checking policy” Facebook mentions in its blog has serious limitations. According to Wired writer Renée DiResta:
One of the main problems with fact-checking is that a video has usually gone viral by the time the facts are established, and when politics is involved, most people don’t believe the results of fact-checking anyway, which can even escalate into partisanship.
A good example: by the time Facebook’s fact-checking mechanism kicked in in 2019, the Nancy Pelosi video had already gone viral; it was ultimately not deleted, and the negative impact never faded. Partisanship also bears an inescapable share of responsibility (in the video, Pelosi is talking about Donald Trump).
Fourth, the policy covers only video. Recently, Representative Paul Gosar, Republican of Arizona, posted a fake photo on Twitter showing former President Barack Obama shaking hands with the Iranian president. Facebook confirmed that such content indeed falls outside the scope of the new policy.
Fifth, the biggest loophole is that the policy does not apply to videos labeled as parody or satire. In fact, disinformation makers often use the “satire” label as cover, slipping through Facebook’s fact-checking process time and again while spreading clearly misleading information. Unless Facebook precisely defines what counts as “satire,” Deepfake videos may well pass themselves off under that name.
Sixth, the policy may be bowing to the powerful. According to the New York Times, a Facebook spokesman emailed:
Facebook does not censor political speech that is in the public interest. If a politician violates the platform’s rules, we will assess it by weighing the “public interest value against the risk of harm.”
In response, the New York Times argues that the definition of “public interest value and risk of harm” is vague, but one thing is certain: politicians are a protected class. That, it seems, makes it easier for politicians to manipulate social media than for ordinary people.
While criticism was to be expected, some have expressed a positive view of the policy.
Sam Gregory, project director for the nonprofit WITNESS, for example, says:
I think it’s a good policy, and before Deepfake becomes a common problem, it’s important that social platforms first make their positions clear and announce their specific policies.
In addition, OneZero believes that doing something imperfectly is better than doing nothing: at the least, Facebook’s new policy anticipates some of the typical Deepfake videos that might appear, which is a political positive. True, the vast majority of Deepfake videos today involve pornography rather than politics, but Facebook’s long-standing ban on nudity and pornography has worked well there.
Facebook’s long-term efforts alongside other companies, academia, and government do demonstrate its sense of social responsibility as the world’s largest social media platform. But faced with Deepfakes, which have already harmed so many women and celebrities, this blog post reads like an ill-considered answer handed in in a hurry.
If a larger group of victims is to be avoided, Facebook may need to think twice.