In the run-up to the 2016 U.S. presidential election, Russians reportedly inflamed American voters by posting divisive messages on Facebook, YouTube and Twitter. After four years of preparation, the internet giants are now scrambling to ensure that November’s election does not repeat the mistakes of the past.
They have spent billions of dollars improving site security, policies and processes. In recent months, fearing possible post-election violence, the companies have taken further steps to curb false information and to promote authentic, verified content.
Here’s a quick overview of what Facebook, Twitter and YouTube will do before and after Tuesday’s election:
Facebook: Before the election
Since 2016, Facebook has invested billions of dollars in strengthening its security operations to combat misinformation and other harmful content. The company says more than 35,000 people now work on safety and security.
The company appointed a former National Security Council official to lead a team dedicated to hunting for coordinated campaigns of false information. The team reports its findings regularly and will be on high alert on Tuesday. Facebook is also working with government agencies and other technology companies to fend off foreign interference.
To make political ads more transparent, Facebook created an ad library that lets anyone see who bought a political ad and how much they spent. The company also introduced new restrictions on political advertising, such as requiring buyers to reside in the United States. On October 20, Facebook stopped accepting new political ads to keep candidates from spreading misleading claims at the last minute.
At the same time, the company is trying to promote accurate information. In June it launched a voter information center with data on when, how and where to vote, and it will promote the feature in News Feed throughout the day on Tuesday. The company said it would act swiftly against attempts to discourage people from voting. It has also limited message forwarding on its WhatsApp chat service and begun working with Reuters to surface confirmed election results.
Facebook is still fine-tuning as the election approaches. Last week it announced that it was turning off recommendations for political and social groups, and it temporarily removed a feature from Instagram hashtag pages to slow the spread of misinformation.
On Tuesday, Facebook will run an operations center, known as a “war room,” staffed by dozens of employees to help ensure a smooth election. Because of the COVID-19 pandemic, the team is collaborating virtually, but the company says it is already up and running smoothly.
Facebook’s app will also be tweaked on Tuesday. To keep candidates from declaring victory prematurely or mistakenly, the company plans to add a notice at the top of News Feed reminding people that the outcome of the election has not been determined until it is verified by news outlets such as Reuters and The Associated Press.
Facebook also plans to deploy, if necessary, special tools previously used in “high-risk countries” such as Myanmar to head off election-related violence. The tools mainly work by slowing the spread of inflammatory content, but the company has not released details.
Facebook: After the election
After the vote, Facebook plans to suspend all political ads on its social network and its photo-sharing site Instagram to reduce the spread of misinformation about the election results. Facebook has told advertisers the ban could last a week, but the timetable is not fixed, and the company has not commented publicly on its duration.
“We spent years improving the security of our platform ahead of the election,” said Kevin McAllister, a Facebook spokesman. “We have learned from previous elections, built new teams with a range of experience, and developed many new products and policies to deal with what might happen before and after Election Day.”
Twitter: Before the election
Since 2016, Twitter has worked to stamp out misinformation, and in some ways it has gone far beyond Facebook. Last year, for example, the company banned political advertising entirely, arguing that political reach “should be earned, not bought.”
At the same time, Twitter labels tweets from politicians who spread incorrect information or promote violence. In May, it added fact-checking labels to President Trump’s tweets about the Black Lives Matter protests and mail-in ballots, and restricted users from sharing them.
In October, Twitter began experimenting with other techniques to slow the spread of misinformation. For example, the company adds background information to trending topics and limits how quickly users can retweet content. The changes are temporary, but Twitter has not said when they will end.
The company also uses push notifications and banners in its app to pre-emptively warn people about common pieces of misinformation, including false claims about the reliability of mail-in ballots. It has also expanded its partnerships with law enforcement and government agencies so they can flag false information directly to Twitter.
In September, Twitter added an election hub that curates information about voting, polls and candidates. The company said it would remove tweets inciting interference with voters or polling places, as well as tweets meant to intimidate people out of voting.
“The entire company has mobilized to help us prepare for and respond to threats that may arise in the election,” said Yoel Roth, Twitter’s head of site integrity.
Twitter is taking a two-pronged approach on Tuesday: using computer algorithms and human analysts to take down false claims and crack down on bot networks that specialize in spreading disinformation, while a dedicated team highlights reliable information in the “Explore” and “Trends” sections.
Twitter plans to label tweets that announce a candidate’s victory before authoritative sources have called the results. The company said candidates would not be able to use Twitter to celebrate a win until at least two news outlets had independently projected the same result.
Twitter says people looking for real-time coverage of Tuesday’s election can find it in the election hub.
Twitter: After the election
Twitter will eventually bring back one-tap retweets that do not prompt users to add their own commentary. But many of the changes deployed for this election, such as the ban on political advertising and the fact-checking labels, will become permanent.
YouTube: Before the election
For Google-owned YouTube, the real wake-up call about the spread of harmful content was not the 2016 presidential election. The turning point came in 2017, when a group of attackers watched inflammatory sermons by an Islamist preacher on YouTube before driving a van into pedestrians on London Bridge.
Since then, YouTube has embarked on a cleanup effort. It tightened its policies on misleading content and adjusted its algorithms to suppress content in the “grey zone.”
It hired thousands of human reviewers to examine videos and improve the algorithms’ performance. It also created a so-called intelligence desk, staffed by analysts who previously worked for government intelligence agencies, to monitor foreign influence operations and internet trends.
Neal Mohan, YouTube’s chief product officer, says he brings employees together several times a week to discuss the election, but the company is not scrambling at the last minute to adjust policies or launch new features.
“We certainly take the election very seriously,” he said in an interview. “The groundwork that really mattered began three years ago, when we took seriously our responsibility as a global platform.”
Through Tuesday, YouTube’s homepage will also help people find out how and where to vote through a variety of links.
Mr. Mohan said he would check in regularly with his team on Tuesday to watch for anything unusual. YouTube does not have a “war room,” however; it believes its regular decision-making process is sufficient and that most decisions about whether to remove or keep a video will be clear-cut.
If tougher calls need to be made during the election period, Mr. Mohan said, they will be escalated to YouTube’s top executives for a collective decision.
YouTube said it would remain on high alert for videos intended to disrupt the election. It prohibits videos that mislead voters about how to vote or who is eligible, or that incite people to interfere in the voting process. The company said it would quickly remove such videos even if they came from a presidential candidate.
As the polls close, YouTube will surface what it considers authoritative news sources and provide a live feed of election results. While YouTube has not released a complete list of those sources, it says it wants the coverage to include the major broadcast networks as well as video from CNN and Fox News.
YouTube: After the election
Starting Tuesday, the company said, it will display a fact-checking information panel above election-related search results and below videos discussing the results. The panel warns that results shown in a video may not be final, and it links to Google’s real-time results, which are aggregated from The Associated Press. How long the panel remains in place will be decided later, as circumstances warrant.
Google said it would stop running election ads after the polls officially close. The policy, which also covers YouTube, will temporarily block all ads that refer to the 2020 election, the candidates or the results. It was not immediately clear how long the ban would last.