Facebook removed 3.2 billion fake accounts and tens of millions of posts flagged as child abuse

In context: Facebook has been under pressure for ‘not doing enough’ to moderate its platform, but the latest figures show the company has been improving its algorithms. It now boasts that it proactively detects more than 99 percent of the child abuse content it removes, and aims to reach the same rate on Instagram soon.

The social giant has released its latest transparency report, which gives us an idea of its overall progress in moderating the platform. The previous report showed Facebook had removed 2.2 billion fake accounts during the first quarter of this year. Since then, it has removed an additional 3.2 billion accounts, thanks to improvements in its automated detection systems. The platform operates at an impressive scale, and Facebook estimates about 5 percent of its 2.45 billion monthly active user accounts are fake.

Interestingly, the company has included Instagram figures in the report for the first time. Content enforcement there has four main areas of focus: child abuse, firearm and drug sales, suicide, and terrorist propaganda.

The company removed tens of millions of posts related to child nudity and sexual exploitation. Specifically, it deleted 11.6 million pieces of content from July to September this year, almost double the amount from the three months prior. Facebook says its algorithms are now good enough to have proactively detected 99 percent of the content it removed. Instagram removed an additional 754,000 pieces of content, with a detection rate of just under 95 percent.

As for drug-related posts, Facebook removed 4.4 million of them, plus 2.3 million linked to firearm sales. Its algorithms were able to proactively flag 84.4 percent and 70 percent of those posts, respectively.

Another important area is hate speech, where the company says it has made significant strides in detection. Its proactive detection rate increased from 68 percent in the first quarter of 2019 to 80 percent in the second quarter, which is solid progress.

It’s worth noting the company had a harder time detecting terrorism-related content on Instagram than on Facebook. It was able to proactively flag 98.5 percent of such content on Facebook, but only managed 92 percent of the Instagram posts.

The company also provided a look at the actions it took on content involving self-harm and suicide. More than 2.5 million posts were removed from Facebook in the third quarter alone, along with 835,000 from Instagram, where the algorithms still need significant tuning to reach detection rates above 90 percent.

The report also shows a 16 percent spike in requests for Facebook user data made by governments around the world, with most coming from the US, India, the UK, Germany, and France. Of the 128,617 requests, 50,741 were made by the US government alone, and 88 percent of those were honored. More worrying is that two-thirds came with a gag order prohibiting Facebook from notifying the user.

Facebook has also been under a lot of pressure as of late for its weak policy on political advertising. Meanwhile, the company is trying to do something about bots and fake accounts by testing a video selfie feature as an additional requirement for setting up an account.

Instagram has also received attention aimed at making the platform a friendlier place. The company recently began hiding Like counts in the US, removed the Following tab, and will soon shadow ban anyone who is flagged as a bully.
