Facebook, Inc. on Wednesday announced a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram, which the media giant said it will start enforcing next week.
“It’s clear that these concepts are deeply linked to organised hate groups and have no place on our services,” Facebook said in a statement. The announcement comes less than two weeks after the suspect in the terror attack on two New Zealand mosques streamed the massacre live on Facebook and allegedly spread a manifesto outlining his white nationalist views.
It is evident that Facebook and other social media channels have long played a part in shaping political landscapes.
Over the past few months, Facebook has also taken down pages and accounts in the Philippines, Myanmar and Indonesia for offences ranging from spreading fake news to posing as independent news outlets while actually being linked to the military.
A 2017 investigation by Facebook found that Russian operatives spent US$100,000 on ads carrying “divisive messages” during the election campaign season in the United States (US). These “divisive messages” are thought to be fake news used to influence the American electorate. According to an analysis by Buzzfeed, in the run-up to the election, the top 20 fake news stories attracted over a million more shares and comments on Facebook than the top 20 hard news stories.
How much influence this fake news had on the American electorate is hard to measure, but many observers believe that among other factors, it did play a significant role in influencing voters.
Then in March last year, it was revealed that Donald Trump had hired Cambridge Analytica as political consultants to work on his election campaign. Investigations by the Guardian and New York Times showed that Cambridge Analytica may have had access to over 70 million Facebook users’ accounts in the US. This data allowed Cambridge Analytica to combine social psychology with data analytics and micro-target users with specific content.
Hiring political consultants to help with elections is nothing new. Fake news and political advertising have been around for decades, and some have even argued that fake news goes back as far as the Vietnam War. Previously, a political advertisement followed a ‘one size fits all’ model in which everyone saw the same thing. Today, everything is bespoke, with content created specifically from the psychological profiles data firms have built on you. This is where ethical lines can be blurred.
The insidious use of social media for political ends has also seeped into Southeast Asia. A study by the University of Oxford found that US$200,000 was spent on hiring paid trolls to spread propaganda for President Rodrigo Duterte. The study highlighted that Duterte had a team of over 400 cyber troopers to post pro-Duterte comments. There have also been reports of people who are critical of Duterte or the government being targeted and harassed by these same cyber troopers.
In Myanmar, Facebook has been put to more violent ends. In 2017, a Facebook post calling for violence against the Rohingya was shared over 13,000 times and drew over 2,000 comments. This is just one of many examples of hate speech being shared on Facebook. Last year, a group of six Myanmar civil society organisations posted an open letter to Mark Zuckerberg, criticising Facebook’s inadequate response to reports of hate speech on the platform.
While Facebook has a policy of banning hate speech, many of the messages inciting racial violence remain online undetected by Facebook’s moderators, revealing a fault in its moderation system. Mark Zuckerberg personally responded to the open letter, apologising and saying that the company was working on AI technology to identify abuses. When questioned about Facebook’s role in Myanmar during his US Congress hearing, Zuckerberg pledged to implement a 24-hour target for blocking hate speech.
Governments in the region need to figure out how to tackle fake news and hate speech without clamping down on freedom of speech, rather than simply blaming Facebook for facilitating them. Some have begun implementing harsh laws against free speech under the guise of fighting fake news. For example, the operating licence of the Philippine news website Rappler was revoked last January after Duterte called it a “fake news outlet” – the latest in a long string of accusations and arrests surrounding the company and its co-founder. Over in Malaysia, the government passed an anti-fake news law that activists criticised as too vague in its definition of fake news.
Other actors are clearly involved in spreading fake news and manipulating data, and Facebook does not bear sole responsibility for it. However, Facebook needs to be aware that it provides a platform for millions, and that information on it can spread like wildfire.
This article was first published by The ASEAN Post on 13 April 2018 and has been updated to reflect the latest data.