Global social media conglomerate Facebook took proactive action on 1.8 million pieces of content containing adult nudity and sexual activity, 2.5 million pieces of violent and graphic content, and about 25 million pieces of spam, the company said in its first monthly report mandated by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
The monthly report, published on Friday, covers the period between May 15 and June 15. Facebook’s photo and short-video sharing platform Instagram took action on 4,90,000 pieces of content containing adult nudity or sexual activity, and proactively removed 6,68,000 pieces of violent and graphic content.
“We use a combination of Artificial Intelligence, reports from our community, and review by our teams to identify and review content against our policies. We will continue to add more information and build on these efforts towards transparency as we evolve this report,” a Facebook spokesperson said.
On July 15, the platform will publish another report which will contain details of user complaints received by Facebook, Instagram, and WhatsApp and the action taken by these three platforms on such complaints.
As per the intermediary guidelines announced in February and in force since May 26, all significant social media intermediaries (those with more than 50 lakh users in India) have to publish monthly reports detailing the complaints received and the action taken on them, as well as the number of specific communication links the platform removed through proactive monitoring.
“We expect to publish subsequent editions of the report with a lag of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and include more information about our efforts in future reports,” Facebook said in its report.
While Facebook has reported a proactive detection rate of more than 95 per cent for most content categories, such as adult nudity and sexual activity, organised hate, hate speech, drugs, firearms, and suicide and self-injury, its proactive detection of bullying and harassment on the platform has remained low at about 37 per cent.
Similarly, on Instagram, while the proactive detection rate for content such as organised hate, terrorist propaganda, hate speech, and suicide and self-injury remained above 80 per cent, the rate for bullying and harassment content remained low at 43 per cent.
“We are increasingly using technology to proactively detect violating content without anyone needing to report it. However, in the case of bullying and harassment, as this content is very contextual and highly personal by nature, in many instances, we need a person to report this behaviour to us before we can identify or remove it,” a Facebook spokesperson said.
Earlier this week, search engine giant Google released its monthly report in compliance with the new IT Rules, saying that between April 1 and April 30 it had received 27,762 complaints, of which 26,707, or 96 per cent, related to copyright. The company acted on and removed 59,350 pieces of content based on complaints raised by users during the period, a majority of which, again, related to copyright infringement.
Apart from Google and Facebook, homegrown micro-blogging platform Koo also released its monthly compliance report this week. In its report, the platform said that of the 5,502 Koos reported by users, 1,253, or about 23 per cent, were removed. The platform also said it proactively moderated 54,235 Koos and removed 1,996 of them.