FROM a “constant barrage of polarising nationalistic content”, to “fake or inauthentic” messaging, from “misinformation” to content “denigrating” minority communities, several red flags concerning its operations in India were raised internally at Facebook between 2018 and 2020.
However, despite these explicit alerts by staff mandated to undertake oversight functions, an internal review meeting in 2019 with Chris Cox, then Vice President, Facebook, found “comparatively low prevalence of problem content (hate speech, etc)” on the platform.
Two reports flagging hate speech and “problem content” were presented in January-February 2019, months before the Lok Sabha elections.
A third report, as late as August 2020, admitted that the platform’s AI (artificial intelligence) tools were unable to “identify vernacular languages” and had, therefore, failed to identify hate speech or problematic content.
These glaring gaps in response are revealed in documents that are part of the disclosures made to the United States Securities and Exchange Commission (SEC) and provided to the US Congress in redacted form by the legal counsel of former Facebook employees and whistleblower Frances Haugen.
The redacted versions received by the US Congress have been reviewed by a consortium of global news organizations including The Indian Express.
Facebook did not respond to queries from The Indian Express on Cox’s meeting and these internal memos.
The review meetings with Cox took place a month before the Election Commission of India announced the seven-phase schedule for the Lok Sabha elections on April 11, 2019.
The meetings with Cox, who quit the company in March that year only to return in June 2020 as the Chief Product Officer, did, however, point out that the “big problems in sub-regions may be lost at the country level”.
The first report “Adversarial Harmful Networks: India Case Study” noted that as high as 40 per cent of sampled top VPV (viewport views) postings in West Bengal were either fake or inauthentic.
VPV or viewport views is a Facebook metric to measure how often the content is actually viewed by users.
The second – an internal report – authored by an employee in February 2019, is based on the findings of a test account. A test account is a dummy user with no friends created by a Facebook employee to better understand the impact of various features of the platform.
This report notes that in just three weeks, the test user’s news feed had “become a near constant barrage of polarizing nationalistic content, misinformation, and violence and gore”.
The test user followed only the content recommended by the platform’s algorithm. This account was created on February 4, it did not ‘add’ any friends, and its news feed was “pretty empty”.
According to the report, the “Watch” and “Live” tabs are pretty much the only surfaces that have content when a user isn’t connected with friends.
“The quality of this content is… not ideal,” the report by the employee said, adding that the algorithm often suggested “a bunch of softcore porn” to the user.
Over the next two weeks, and especially following the February 14 Pulwama terror attack, the algorithm started suggesting groups and pages which centered mostly around politics and military content. The test user said he/she had “seen more images of dead people in the past 3 weeks than I have seen in my entire life total”.
Facebook had in October told The Indian Express it had invested significantly in technology to find hate speech in various languages, including Hindi and Bengali.
“As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05 percent. Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement and are committed to updating our policies as hate speech evolves online,” a Facebook spokesperson had said.
However, the issue of Facebook’s algorithm and proprietary AI tools being unable to flag hate speech and problematic content surfaced in August 2020, when employees questioned the company’s “investment and plans for India” to prevent hate speech content.
“From the call earlier today, it seems AI (artificial intelligence) is not able to identify vernacular languages and hence I wonder how and when we plan to tackle the same in our country? It is amply clear what we have at present, is not enough,” another internal memo said.
The memos are part of a discussion between Facebook employees and senior executives. The employees questioned how Facebook did not have “even basic keyword detection set up to catch” potential hate speech.