
Fake News, Hate Speech and Misinformation: Inside Facebook's Challenges in India

Editor, TRANSFIN.
Oct 25, 2021, 1:42 PM · 6 min read
Editorial

Facebook’s relationship with India is a complicated one, to put it mildly.

On one hand, the country is the social media giant’s largest market, with 340 million users on its namesake platform, 400 million on WhatsApp, and 180 million on Instagram. Facebook has invested billions to expand its India operations and holds big investments (Jio, Meesho, Unacademy etc.) in our rapidly expanding digital economy. The firm’s top India executive reports directly to Mark Zuckerberg, underscoring the region’s significance to the company.

But on the other hand, India is probably the best (or should we say “worst”?) example of all things wrong with Facebook - and, on a broader level, social media in general. Indian regulators’ distrust of Big Tech is well-documented. WhatsApp Pay was mired in regulatory limbo for months before it was finally rolled out. In March, the Union Government threatened to jail Facebook’s employees over the network’s reluctance to comply with its takedown and data requests. And three months later, WhatsApp sued the Indian Government over its new social media regulations.

The biggest bone of contention, however, revolves around content. Specifically, the vicious kind. And what Facebook is - and isn’t - doing about the same.

October Revelations

Over the last few days, numerous reports in mainstream media outlets have cited internal Facebook documents to publish pieces on something everyone already knew: Facebook’s products in India are awash with hateful content.

What’s new, however, is that Facebook’s own researchers quantified the scale of this problem - and that the company has been knowingly selective in enforcing its hate speech policies.

According to The Wall Street Journal, a July 2020 internal Facebook research report showed that inflammatory content on Facebook spiked 300% above previous levels in the months following December 2019, during the protests against the Citizenship Amendment Act. Fake news and calls for violence particularly peaked in February last year during the Delhi riots, which saw 53 killed and hundreds injured.

The report went into detail on how much of the rumour-mongering happens on WhatsApp and is incubated in private Facebook groups. Much of it is targeted at religious minorities, with copious amounts of content dehumanising Muslims, blaming them for COVID-19 or alleging things like “Hindus are in danger” and “Muslims are about to kill us”. And many of the accounts propagating such hatred are tied to the country’s top political brass.

AP, meanwhile, studied company memos dating back to 2019, which showed that (1) Facebook knew for years that much of the bile on its platform was the result of its own "recommended" feature and algorithms, (2) company staffers routinely voiced concerns over the mishandling of these issues, and (3) the company was selective about enforcing its policies, out of fear of political backlash.

And finally, another string of news reports showed how Facebook’s problem is Facebook itself. The very way the platform - and, indeed, social media in general - is designed fosters echo chambers and rewards virality over quality and controversy over nuance. In February 2019, the company created a dummy account in Kerala to test the effects of continued Facebook exposure. The results - revealed by Bloomberg - showed that after only three weeks of following the platform's algorithm-based recommendations, the account's feed was filled with fake news, doctored images and calls for violence.

“Following this test user’s News Feed, I have seen more images of dead people in the past three weeks than I have seen in my entire life total,” the Facebook researcher who created the account reportedly wrote in that internal report.

Now, Facebook claims that this exercise “inspired deeper, more rigorous analysis of our recommendation systems” and “contributed to product changes”. However, considering the deluge of hate that flooded its platforms last year, the company was clearly not inspired enough.

 

Selectively Proactive

As in many other countries, Facebook has become entangled in India’s politics. Naturally, it has gotten messy.

On the one hand, the ruling BJP accuses it of not complying with takedown requests for content it deems “objectionable” (an allegation that peaked amid the farmer protests last year, and one that particularly bruised GoI’s relations with Twitter) and of not shelling out information regarding user data when asked (India ominously leads the world in this).

But on the other hand, Opposition parties accuse the social media giant of turning a blind eye to hateful content posted by users seen as being in bed with the BJP. For instance, an internal Facebook report concluded that the Bajrang Dal, a fringe group, used WhatsApp to “organise and incite violence”, and that “much of the content posted by users, groups and pages from the...RSS is never flagged”.

Why does Facebook dither? Two possible reasons.

One: hate, as it happens, is seen as good business for Big Tech. Not only is controversial content perfect fodder to keep users engaged, but any action against bad actors with political backing may hurt the company’s future plans.

Take the case of Raja Singh, a BJP MLA from Telangana, who in 2020 called for Muslim immigrants to be shot and mosques to be razed. Facebook's content moderators concluded that he broke the platform's rules and qualified as "dangerous", which should have meant an immediate ban. Instead, Facebook wavered. The company's then top India public-policy executive, Ankhi Das, told employees that "punishing violations by politicians from Mr. Modi’s party would damage the company’s business prospects in the country".

Now, Singh was finally banned, but only after weeks of vacillation, and only after Facebook’s reluctance leaked to the press and sparked a PR firestorm.

Two: content moderation is no simple task. In essence, it is akin to the proverbial Hydra - except that in India, the many-headed serpent also speaks dozens of languages in hundreds of dialects. Much of the objectionable content on Facebook’s platforms flies under the radar simply because the company lacks sufficient technical systems for detecting material in Indian languages. Facebook has promised to amp up its content moderation infrastructure in India, but considering how even its English-language moderation apparatus is woefully underdeveloped, it’s safe to assume that we may have to wait longer for effective Bengali or Kannada content monitors.

 

The Road Ahead

A Mumbaikar who was surveyed by Facebook for its July 2020 report told researchers, “If social media survives 10 more years like this, there will be only hatred... [And India will be a] very difficult place to survive for everyone.”

Since Facebook platforms are now an intricate and inseparable part of our lives, quitting them is not really an option. What’s the way out, then?

There are possible solutions. (But each comes with caveats.) Some point to the recent social media regulations as a step in the right direction. (But why would an all-powerful government be any better-intentioned about content monitoring than a private company?) Some say breaking up Big Tech is the only way forward. (But the FTC’s attempts at divorcing Facebook from WhatsApp and Instagram have hit a dead-end.) There are also broader solutions - reimagining digital literacy and educating users about how to filter out fake news on their own. (But this is a long-term solution, and one that will become more impractical in an era of deepfakes.)

What do Facebook researchers say? One solution proposed in internal company reports was to have “Facebook invest more in resources to build out underlying technical systems that are supposed to detect and enforce on inflammatory content in India the way human reviewers might”. Another pitch involves creating a “bank” of inflammatory material “to study what people were posting, and creating a reporting system within Facebook’s WhatsApp messaging app that would allow users to flag specific offending messages and categorise them by their contents”.

To clean up its act in India, Facebook has a long way to go. Expanding and diversifying its content moderation team - whilst continuing to refine its AI apparatus for identifying hate speech - is a good start. Fixing its default settings and hate-attracting algorithms would go a long way in making social media less divisive. And applying its rules without selective treatment would ensure fair play and build trust.

Unless Facebook owns up to its mistakes and commits to applying the relevant fixes, the aforementioned Mumbaikar’s fears will likely be realised.

By then, Facebook will apparently have a new name. “Red Flag” seems like a fitting moniker.

FIN.
 
