Facebook moderators press for pandemic safety protections

Remigio Civitarese
November 21, 2020

Facebook removed over 22 million pieces of content for hate speech during the third quarter of 2020. The company now proactively detects about 95% of the hate speech content it removes. According to Facebook, the prevalence of hate speech in Q3 2020 was 0.10% - 0.11%, or roughly 1 view for every 1,000 views of Facebook content. Yet the majority of languages on Facebook lack translated information about reporting any harmful content, let alone hate speech, which has led to accusations that the company treats certain markets purely as business opportunities.

Facebook has revealed its first-ever report on the prevalence of hate speech on its platform.

Although the company points to its community standards enforcement report as evidence that its AI is improving, in the open letter, employees said their recall to the office showed "the AI wasn't up to the job".

The update comes just days after Facebook CEO Mark Zuckerberg spoke to Congress about internet regulation, during which he repeatedly pointed out the company's reliance on algorithms to spot terrorist and child-exploitation content before anyone sees it.

And while strides have been made in proactive detection of hate speech, the platform still has a lot of work to do.

Facebook's photo-sharing site Instagram took action on 6.5 million pieces of hate speech content, up from 3.2 million in Q2.

"This is really sensitive content", the moderators write in the letter, arguing that they are "the heart" of Facebook: "It is time that you acknowledged this and valued our work". "The idea of moving to an online detection system optimized to detect content in real time is a pretty big deal", he said.

On a call with reporters, Facebook's head of safety and integrity Guy Rosen said the audit would be completed "over the course of 2021". Memes are typically clever or amusing combinations of text and imagery, and only in the combination of the two is the toxic message revealed, he said.

"They will be rolled back the same as they were rolled out, which is very carefully", he said. For example, the company banned political ads in the week before and after the election, and recently announced that it would continue the ban on those ads until further notice.

"Whether content is proactively detected or reported by users, we often use AI to take action on the straightforward cases and prioritize the more nuanced cases, where context needs to be considered, for our reviewers", Facebook said. The company defines hate speech as anything that directly attacks people based on protected characteristics, including race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, or serious disability or disease.

Through the letter, the content moderators express their frustration with Facebook's demands that they work to tight deadlines with little support and low pay, and with the company forcing these outsourced moderators to come into the offices even during a pandemic.

Now, on top of work that is psychologically toxic, holding onto the job means walking into a hot zone. "Yet we are so integral to Facebook's viability that we must risk our lives to come into work".

Facebook has long had issues in dealing with misinformation on a range of topics.

The Community Standards Enforcement Report is published in conjunction with Facebook's biannual Transparency Report.
