Facebook said on Tuesday that it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures.
Facebook released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 alongside more stringent content rules, in response to a backlash over its lax approach to policing material on its platforms.
The social media giant said it would invite external experts to independently audit the metrics used in the report, beginning in 2021.
The world’s biggest social media company removed about 22.5 million posts containing hate speech on its flagship app in the second quarter, up from 9.6 million in the first quarter.
It also deleted 8.7 million posts connected to extremist organizations, compared with 6.3 million in the prior period.
Facebook said it relied more heavily on automated technology to review content in April, May and June.
That resulted in the company taking action on fewer pieces of content related to suicide and self-injury, child nudity and sexual exploitation on its platforms, Facebook said in a post.
The company said it was expanding its hate speech policy to cover content depicting blackface and stereotypes about Jewish people controlling the world.