Nudity, hate speech and spam: Facebook reveals how much content it kills


"We use technology, combined with people on our teams, to detect and act on as much violating content as possible before users see and report it". In the first quarter, Facebook disabled about 583 million fake accounts and removed 837 million pieces of spam, the report said.

Facebook's vice president of product management, Guy Rosen, said the company's detection systems are still in development for some categories of content.

Facebook said it released the report to start a dialogue about harmful content on the platform and about how it enforces its community standards to combat it.

The release of the report, the first time the company has ever made such data public, comes on the heels of a series of other first-ever efforts at transparency following the Cambridge Analytica scandal, Facebook's subsequent apologies, and Mark Zuckerberg's many hours of testimony on Capitol Hill.

"It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important," Rosen said.

The company estimates that between 0.22 percent and 0.27 percent of views contained content that violated Facebook's standards for graphic violence in the first quarter of 2018.

"It may take a human to understand and accurately interpret nuances like. self-referential comments or sarcasm", the report said, noting that Facebook aims to "protect and respect both expression and personal safety".

"For graphic violence, we took down or applied warning labels to about 3.5 million pieces of violent content in Q1 2018 - 86% of which was identified by our technology before it was reported to Facebook", it said.

Facebook attributed the increase over the previous quarter to enhanced photo detection technology.

For years, Facebook has relied on users to report offensive and threatening content.

Adult nudity and sexual activity: Facebook says 0.07% to 0.09% of views contained such content in Q1, up from 0.06% to 0.08% in Q4.

Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.

Facebook acknowledged it has work to do when it comes to properly removing hate speech: only 38 percent of the hate speech it acted on had been detected through Facebook's own systems, with the rest flagged by users.

Terrorist propaganda (ISIS, Al Qaeda, and affiliates): Facebook says it took action on 1.9 million pieces of such content, and found and flagged 99.5% of such content before anyone reported it.

That represented a 73 percent increase over the previous quarter, which the company attributed to improved detection technology. Facebook also says it found and flagged almost 100% of spam content in both Q1 and Q4.

The company estimated that around 3% to 4% of the active Facebook accounts on the site during this time period - roughly 66 million to 88 million out of 2.19 billion - were fake.

"And more generally, as I explained last week, technology needs large amounts of training data to recognise meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported," Rosen said.
