Facebook is letting violent hate speech slip through its controls in Kenya as it has in other countries, according to a new report from the nonprofit groups Global Witness and Foxglove.
It is the third such test of Facebook's ability to detect hateful language — either via artificial intelligence or human moderators — that the groups have run, and that the company has failed.
The ads, which the groups submitted in both English and Swahili, spoke of beheadings, rape and bloodshed. They compared people to donkeys and goats. Some also included profanity and grammatical errors. The Swahili-language ads easily made it through Facebook's detection systems and were approved for publication.
As for the English ads, some were rejected at first, but only because they contained profanities and mistakes in addition to hate speech. Once the profanities were removed and grammar errors fixed, however, the ads — still calling for killings and containing obvious hate speech — went through without a hitch.
"We were surprised to see that our ads had for the first time been flagged, but they hadn't been flagged for the much more important reasons that we expected them to be," said Nienke Palstra, senior campaigner at London-based Global Witness.
The ads were never posted to Facebook. But the fact that they easily could have been shows that despite repeated assurances that it would do better, Facebook parent Meta still appears to regularly fail to detect hate speech and calls for violence on its platform.
Global Witness initially said it had reached out to Meta after its ads were accepted for publication but received no response. On Thursday, however, the group said it had in fact received a response earlier in July that was lost in a spam folder. Meta also confirmed Thursday that it had sent a response.
"We've taken extensive steps to help us catch hate speech and inflammatory content in Kenya, and we're intensifying these efforts ahead of the election. We have dedicated teams of Swahili speakers and proactive detection technology to help us remove harmful content quickly and at scale," Meta said in a statement. "Despite these efforts, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That's why we have teams closely monitoring the situation and addressing these errors as quickly as possible."