Facebook is Relying on AI More Than Ever As Human Content Moderators Are at Home
The new Facebook Community Standards Enforcement Report includes metrics across 12 policies on Facebook and 10 policies on Instagram.

Facebook has released the fifth edition of its Community Standards Enforcement Report, which now includes more metrics and describes the proactive use of Artificial Intelligence (AI) alongside human fact-checkers to moderate content on its platform. The report covers policies implemented and data through March 2020, which means it captures a significant chunk of the COVID-19 pandemic period as well. Facebook notes that during this time, with content reviewers sent home, it had to rely even more on automated systems and prioritized high-severity content for its teams to review.

The report includes metrics across 12 policies on Facebook and 10 policies on Instagram, and introduces Instagram data in four issue areas: Hate Speech, Adult Nudity and Sexual Activity, Violent and Graphic Content, and Bullying and Harassment. “On Facebook, we continued to expand our proactive detection technology for hate speech to more languages, and improved our existing detection systems. Our proactive detection rate for hate speech increased by more than 8 points over the past two quarters totaling almost a 20-point increase in just one year. As a result, we are able to find more content and can now detect almost 90% of the content we remove before anyone reports it to us,” says Guy Rosen, VP of Integrity at Facebook. Instagram, meanwhile, benefited from improvements to its text- and image-matching technology, which helped it identify more self-harm and suicide content.

On Facebook, hate speech remains a big problem. Between January and March, more than 9.6 million pieces of content were actioned, 88.8% of which was found before any user reported it. Adult nudity and sexual activity saw 39.5 million pieces of content actioned, and Facebook says 99.2% of that content was detected before users flagged it. Bullying and harassment content that was identified and actioned fell this quarter to 2.3 million pieces, compared with 2.8 million in the Oct-Dec 2019 quarter.

On Instagram, adult nudity and sexual activity accounted for 8.1 million posts actioned, while bullying and harassment saw 1.5 million pieces of content. Child nudity and exploitation increased to 1 million posts (compared with 685.5k pieces of content in the previous quarter), and self-injury and suicide-related posts shot up to 1.3 million pieces of detected content, compared with 896.8k in the previous quarter.

“This report includes data only through March 2020 so it does not reflect the full impact of the changes we made during the pandemic. We anticipate we’ll see the impact of those changes in our next report, and possibly beyond, and we will be transparent about them,” the social media giant says.

Facebook says the next Community Standards Enforcement Report will be released in August.
