Facebook’s battle with “fake news” has entered a new phase: it has launched a fake-news labeling feature for Singapore. At the bottom of an article that the government says contains false content, Facebook appends a statement: “In accordance with relevant laws and regulations, it is necessary for Facebook to inform you that the Singapore Government says this article contains false information.”
The proliferation of false information has plagued Facebook for years, and, prodded by the Singapore government, the company seems to have taken a step forward. But the problem has no quick fix, and the move appears to have brought Facebook fresh trouble of its own.
Fake News Act
In April 2017, Singapore’s Minister for Law, K. Shanmugam, first suggested that Singapore’s laws needed to be reviewed so that the spread of fake news on social media could be fought at the legislative level. Eight months later, the matter entered the public hearing process. In April 2019, the first version of the Protection from Online Falsehoods and Manipulation Act was introduced, and on May 8th it was passed by an overwhelming margin of 72 votes to 9.
The act, which went into effect in October, requires media platforms such as Facebook to issue correction notices for false information or to delete it. On November 29, Facebook used the fake-news label for the first time.
The author of the article labeled “fake news” is a critic of the Singapore authorities who had accused the government of using improper means to arrest dissidents. The government clarified that “no arrests have been made” and then directed Facebook to mark the post as fake news.
Facebook’s fake-news problem is entangled with a wide range of other issues, including privacy violations, election interference, and hate-speech-fuelled violence. In April 2018, Zuckerberg appeared before the U.S. Congress for a hearing to explain how Facebook user data had been shared with the political consulting firm Cambridge Analytica, which ultimately influenced the U.S. election.
Facebook is a company built on “relationships” and “content.” From day one, Zuckerberg wanted Facebook to replicate real society in the virtual world of the Internet: everything people see and hear, and the inextricable relationships between them.
Facebook has achieved this. Of the 4.5 billion people worldwide with access to the Internet, 2.5 billion are Facebook users. This society is very real, but it is also easy to exploit, influence, and even manipulate.
How to maintain order in this online society has become a top headache for Facebook and for governments. Governments often ask Facebook to block or remove “illegal content”; according to Facebook’s Transparency Report, such requests were made some 18,000 times worldwide in the first half of this year alone.
The final say on truth
Facebook has long refused to simply “delete posts.”
It has set its own standards to efficiently clean up pornography, violence, hate, and inflammatory content. But Facebook has tried to remain neutral and to avoid proactively policing narrative content and opinions.
Facebook does investigate some content that has caused controversy. Rather than doing the verification itself, it hires independent third-party fact-checkers, usually professional journalists, to verify the authenticity of the content. If an article is identified as “fake news” by these reviewers, Facebook does not delete it; instead, it lowers the content’s ranking weight, reducing how often it appears in users’ feeds. Facebook has also devised a warning mechanism: before users click through to content rated false, a notice tells them that professionals have identified it as “fake news,” and a second confirmation is required to continue reading.
That is why Facebook’s attitude toward Singapore’s law is quite negative. Facebook objects to the law’s placing the government, rather than third-party agencies, in charge of verifying the authenticity of content. The statement at the bottom of the article makes clear that it is the government that believes the content contains false information, and the notice is shown only to users in Singapore. Facebook has also set up an information page, reachable via a “Learn More” button next to the fake-news statement, which says that Facebook does not accept the Singapore government’s authority to define fake news, and that Facebook is a neutral platform.
Facebook said it hoped the government would ensure that the law is implemented “in a planned and transparent manner” and not used to suppress legitimate freedom of expression. A government spokesman said the two sides were in communication to ensure the process goes smoothly in similar incidents in the future.
At the same time, officials have clarified that the law covers factual statements, not “opinions.” That is, it will not be used to suppress legitimate criticism, nor to suppress political opponents. The government will judge only the “authenticity” of the content and whether intervening is in the public interest.
The complexity of the matter speaks for itself. Factual narratives often contain opinions, and truth is not a simple binary of “true” and “false”; more often, people face exaggerations, distortions, and narratives told from a particular perspective. It is precisely because truth is so complex that Facebook hesitates to define it, let alone claim the final say on what is true.
Singapore’s hasty push for legislation to crack down on fake news reflects its hope that the general election, due by April 2021, will be disturbed as little as possible by fake news. As people increasingly get their information from the Internet, how do you ensure that they are not wrongly incited and misled into electing the wrong leader?
It’s a serious question, but no one can be the perfect judge in the face of complex realities.