
Pre and post 2019 polls: Facebook memos flag anti-minority post surge.

A July 2020 report specifically noted there was a marked rise in such posts in the preceding 18 months, and that the sentiment was “likely to feature” in the coming Assembly elections, including West Bengal.

Written by Aashish Aryan, Karunjit Singh, Pranav Mukul | New Delhi |

Updated: November 11, 2021 4:31:03 am


MULTIPLE INTERNAL Facebook reports over the last two years red-flagged an increase in “anti-minority” and “anti-Muslim” rhetoric as “a substantial component” of the 2019 Lok Sabha election campaign. A July 2020 report specifically noted there was a marked rise in such posts in the preceding 18 months, and that the sentiment was “likely to feature” in the coming Assembly elections, including West Bengal.

The increase in hate speech and inflammatory content was mostly centred around “themes” of threats of violence, Covid-related misinformation involving minority groups, and “false” reports of Muslims engaging in communal violence.

These reports are part of documents disclosed to the United States Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form by the legal counsel of ex-Facebook employee and whistleblower Frances Haugen.

The redacted versions received by the US Congress have been reviewed by a consortium of global news organisations including The Indian Express.

In one internal report in 2021, ahead of the Assembly elections in Assam, Himanta Biswa Sarma, now Assam Chief Minister, was flagged as being party to trafficking in inflammatory rumours about “Muslims pursuing biological attacks against Assamese people by using chemical fertilizers to produce liver, kidney and heart disease in Assamese.”

When asked by The Indian Express about this and whether he knew of his “fans and supporters” indulging in hate speech, Sarma said he was “not aware of that development”. Asked whether Facebook had contacted him to flag the content posted on his page, Sarma said: “I had not received any communication.”

Another internal Facebook report, titled “Communal Conflict in India”, notes that inflammatory content in English, Bengali, and Hindi spiked numerous times, especially in December 2019 and March 2020, coinciding with the protests against the Citizenship Amendment Act and the start of lockdowns enforced to prevent the spread of Covid-19.

Despite the presence of such content on the platform, the documents reveal, there was a palpable clash between two internal Facebook teams: those flagging problematic content and those designing the algorithms that push content onto the News Feed.

To tackle such problematic content, an internal staff group had, in the July 2020 report, suggested various measures: developing “inflammatory classifiers” to detect and enforce against such content in India, improving the platform’s image-text modelling tools so that such content could be identified more effectively, and building “country specific banks for inflammatory content and harmful misinformation relevant to At Risk Countries (ARCs)”.

Nearly all these reports place India in the ARC category, where the risk of societal violence arising from social media posts is higher than in other countries.

According to another 2021 Facebook internal report, titled “India Harmful Networks”, groups claiming affiliation with the Trinamool Congress coordinated their posting through large messenger groups, circulating instructions and then posting the same messages across multiple similar groups in an attempt to boost the audience for content that was “often inflammatory” but “usually non-violating”.

The posts from RSS and BJP affiliated groups, on the other hand, carried a high volume of “love jihad” content with hashtags “linked to publicly-visible Islamophobic content”, the internal report noted.

Queries sent to the BJP, RSS, and TMC went unanswered.

Despite all these red flags, another group of staffers at the social media firm suggested only a “stronger time-bound demotion” of such content.

Asked if the social media platform took any measures to implement these recommendations, a spokesperson for Meta Platforms Inc — Facebook was rebranded as Meta on October 28 — told The Indian Express: “Our teams were closely tracking the many possible risks associated with the elections in Assam this year, and we proactively put in place a number of emergency measures to reduce the virality of inflammatory comments, particularly videos. Videos featuring inflammatory content were identified as high risk during the election, and we implemented a measure to help prevent these videos from automatically playing in someone’s video feed”.

“On top of our standard practice of removing accounts that repeatedly violate our Community Standards, we also temporarily reduced the distribution of content from accounts that had repeatedly violated our policies,” the spokesperson said.

On rising hate content, the Meta spokesperson said hate speech against marginalised groups, including Muslims, was on the rise globally.

“We have invested significantly in technology to find hate speech in various languages, including Hindi and Bengali. As a result, we have reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.03 per cent,” the spokesperson added.

Not only was Facebook made aware of the nature of the content being posted on its platform, but it also discovered, through another study, the impact of posts shared by politicians.

In one internal document titled “Effects of Politician Shared Misinformation”, examples from India figured as “high-risk misinformation” shared by politicians which led to a “societal impact” of “out-of-context video stirring up anti-Pakistan and anti-Muslim sentiment”.

The study pointed out that users thought it was “Facebook’s responsibility to inform them when their leaders share false information”. There was also debate within the company, according to the documents, on what should be done when politicians shared previously debunked content.

 

...