
What is Content Moderation on Social Media Platforms?

Editor, TRANSFIN.
Oct 23, 2020 6:25 AM 4 min read
Editorial

Last year, IT services major Cognizant made a strange decision. Despite posting the slowest growth rate in its history, the company said it would drop a lucrative $250m work contract with Facebook and shed 11,000 jobs.

The work in question involved content moderation - one of the most undesirable, gruelling, traumatic...and indispensable professions of the social media age.

Who are content moderators?

Content moderators are the individuals who spend hours each day reviewing the millions of posts flagged by users as “inappropriate”. Such objectionable content includes hate speech, graphic violence, child pornography, torture, nudity, suicide, solicitation, drug abuse, terrorism, bestiality or bullying.

Basically, whenever you come across an image, video or post online that you think is improper or dangerous, you can report it to the concerned platform. What happens after this is usually some sort of AI screening followed by a review by human moderators. These individuals comb through reported content and decide whether it can stay online or should be taken down.
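By way of illustration, the flow described above - a user report, an automated first pass, then a human decision on the ambiguous cases - could be sketched roughly as follows. This is a purely hypothetical mock-up: the function names, thresholds and the stub classifier are our own assumptions, not any platform’s actual system.

```python
# Purely illustrative sketch of a report-review pipeline.
# Function names, thresholds and the stub "model" are hypothetical -
# they do not describe any platform's actual moderation system.

from dataclasses import dataclass


@dataclass
class Report:
    post_id: str
    reason: str  # e.g. "hate speech", "graphic violence"


def ai_screen(report: Report) -> float:
    """Stand-in for an ML classifier: returns the estimated probability
    that the reported post violates platform policy."""
    return 0.5  # stubbed out; a real system would run a trained model here


def handle_report(report: Report) -> str:
    score = ai_screen(report)
    if score > 0.95:
        return "removed automatically"      # clear-cut violation
    if score < 0.05:
        return "kept online"                # clearly benign
    return "queued for human moderator"     # ambiguous: a person decides


if __name__ == "__main__":
    print(handle_report(Report(post_id="abc123", reason="hate speech")))
```

The thresholds in the sketch are the crux: everything the machine cannot confidently decide lands in a human moderator’s queue.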

Content moderators are employed across the world and across linguistic demographics. In the US, Facebook alone hires 15,000 of them; globally, the profession counts over a hundred thousand.

And considering the enormous power and reach of social media platforms, they are increasingly on the frontlines of the fight for a cleaner internet.


The dark underworld of content moderation

Sifting through the worst of humanity for hours every day takes a brutal toll on one’s peace of mind.

Many moderators quit their jobs abruptly, but they take the ensuing psychological trauma with them - trauma that often translates into drug abuse, depression and isolation.

Since the onset of the coronavirus pandemic, such work has become all the more traumatising for moderators, given the lack of a workplace support structure - or even company at home in many cases.

Moreover, most of these jobs are low-paying and entry-level. The latter is particularly problematic because when youngsters are exposed to such gruesome content, the effects can be even more damaging.

Then there’s the nature of these undertakings. Social media platforms don’t usually hire their own content moderators. Instead, this responsibility is shifted onto third parties (IT services firms like Cognizant, Accenture, Genpact etc.), which in turn hire temporary workers on precarious contracts.

Under such conditions, it is not just the health and well-being of the workers that suffers - the probability of error is also high. Facebook CEO Mark Zuckerberg himself admitted that 10% of reviewed posts on the platform end up being incorrectly labelled. Sometimes this is amusing - like YouTube banning educational videos on Nazi Germany for being “hate speech”. But other times it is dangerous - such as when Myanmar’s military used Facebook to incite genocide against the Rohingya minority in 2016 and 2017.


The content moderators’ rebellion

In 2018, over 10,000 former and current content moderators sued Facebook. Many of these workers suffered psychological problems due to their work, including PTSD. Earlier this year, Facebook agreed to settle the case for $52m - and consented to require its third-party contractors to implement some changes so as to reform this line of work.

These changes include:

  1. Screening job applicants for “emotional resilience” before onboarding them
  2. Posting information about psychological help and counselling at each workstation
  3. Installing a system through which moderators can report any violation of Facebook's workplace standards by the vendors they work for

But IT services companies are reluctant to proceed with such content moderation contracts (which is why Cognizant dropped the deal with Facebook last year) - not only because this profession is an invariably scarring one, but also because of the changes Facebook will now be required to mandate. (Other tech companies like Google and Twitter may replicate these changes themselves to avoid any future liabilities.)

Why?

  • Because defining and checking for “emotional resilience” is a challenging task - especially in a profession with such a high turnover rate. And even if only seasoned and “emotionally strong” people were hired, they would not be immune to the effects of reading or watching extreme content like shootings, child abuse or rape.
  • And point #3 above practically requires workers to turn whistleblower, which no corporate entity would be excited about.


The burden of the third world?

Following the class-action lawsuit in the US, industry observers opine that social media companies could increasingly shift content moderation jobs to other countries like India and the Philippines.

Such outsourcing makes sense for company execs - they are less exposed to legal action in developing countries. (In India, for example, mental health is not considered an occupational hazard.) But needless to say, the mental scars inflicted on those who work in this profession cut deep - regardless of nationality. 


What’s the way forward?

Content moderation is often called “the worst job in technology”. The main problem with how tech firms deal with it is probably that they view it as a peripheral profession - one that can be outsourced to other firms and executed by untrained freshers.

On the contrary, considering the nature of the work, content moderation ought to be viewed as one of the central occupations in the tech world.

According to New York University’s Stern Center for Business and Human Rights, social media companies like Facebook need to bring content moderators in-house, make them full employees and double their numbers. Other suggestions include:

  • Hire a specialist to oversee content moderation (someone high-level, who reports directly to the CEO or COO)
  • Provide all moderators with top-quality, on-site medical care, including access to psychiatrists
  • Expand moderation facilities in places like Asia and Africa
  • Sponsor research into the health risks of content moderation, in particular PTSD

Besides these steps, these companies would do well to invest heavily in improving their AI systems’ content moderation prowess. (Incidentally, content moderation is probably one profession that should be lost to automation!)
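To make that point concrete, here is a toy calculation (the scores and labels are invented purely for illustration, not drawn from any platform’s data) of the trade-off at stake: the more a classifier is trusted to act on its own, the fewer posts human moderators have to see - but, if the model is weak, the more automated calls go wrong.

```python
# Toy illustration of the automation trade-off. All scores and labels below
# are invented for demonstration purposes - no real platform data is used.

# (classifier_score, actually_violates_policy) for a batch of reported posts
reports = [(0.99, True), (0.97, True), (0.75, False), (0.60, False),
           (0.40, True), (0.25, True), (0.05, False), (0.02, False)]


def simulate(threshold: float) -> None:
    """Auto-act when the model is confident either way; escalate the rest."""
    auto = [r for r in reports if r[0] >= threshold or r[0] <= 1 - threshold]
    human_queue = [r for r in reports if r not in auto]
    # An automated call is wrong when "remove" (score >= threshold) does not
    # match whether the post actually violates policy.
    mistakes = sum(1 for score, violates in auto
                   if (score >= threshold) != violates)
    print(f"threshold={threshold:.2f}: {len(human_queue)} posts for humans, "
          f"{mistakes} automated mistakes")


for t in (0.99, 0.90, 0.70):
    simulate(t)
```

A better model shifts this trade-off, keeping automated mistakes low while letting fewer traumatic posts reach human eyes - which is precisely the case for investing in it.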

Social media companies began their journey on a mission to enable the free flow of information. This may have been well-intentioned, utopian or naive. But to secure safety and security on the internet, social media companies will also have to invest amply in stopping the free flow of misinformation.

FIN.
