by Stefano Bocconetti
– “I’m going to kill you, I will behead you, I’m going to chop you up, you filthy Ogiek villager”
– “Warning: Facebook rules prohibit the use of the adjective ‘filthy’ directed at a person”
– “I will kill you, behead you, chop you into pieces, Ogiek villager.”
– “OK”
It sounds absurd, but this is more or less what happened on the eve of the elections in Kenya, set for 9 August. In a country that entrusts much of its political debate to social media (nineteen million users, seventeen million of them on Facebook, amounting to almost 20 per cent of the population), Zuckerberg’s giant pretends not to see what is happening on its pages; it does not seem interested in looking. The complaint is not new, of course: already at the time of the tragic 2017 elections there was talk of the negative role of social media in fomenting violence. On the eve of this election, however, that complaint has been documented in detail. One of the world’s most authoritative digital rights organisations, London-based Foxglove, together with Global Witness, an association that studies and analyses crises all over the world, has in recent weeks ‘tested’ the phenomenon in the field.
And the report they have produced is damning. Because the team of researchers did not limit itself to sifting through existing content. It did more: it wrote about twenty short messages, in English and Swahili, deliberately and overtly violent, complete with incitements to kill, to silence ethnic minorities and small opposition parties, to ‘gag’ feminist movements. It then submitted them to Facebook as ‘ads’, that is, as paid posts.
Why this choice? First, for those who know how things work on the world’s most popular social network, because a paid post awaiting publication can still be withdrawn, and the researchers obviously did not want their test posts to reach the public. The second reason concerns Meta, the group that owns Facebook, which has always claimed to monitor paid messages more carefully, especially ads on the eve of elections. Claims that turned out to be false.
Completely false. Because Foxglove and Global Witness discovered that their examples of hate speech had quietly been given the green light. Had it been up to Facebook, they would have been published as written. With one exception: when common swear words were added alongside the violent phrases, the authors received a message urging them to remove the offending words. Once the vulgarities were gone, Facebook accepted the text even though it still contained an incitement to murder. After the report was published, Facebook-Meta responded with an email claiming that it does everything possible to monitor discussions on its platforms, even though mistakes, both human and by artificial intelligence, can occur. It says, however, that it will do even more, with stricter controls in the run-up to the elections.
Is that the end of it? No, because Foxglove and Global Witness ran the test again after Zuckerberg’s reassuring statements, with the same results: another of their messages inciting violence was authorised (and again withdrawn by the authors in time). Kenya, then. A story that comes only a few months after another dramatic denunciation: the bystander role Facebook assumed while massacres and deaths were rampant in Tigray, Ethiopia. It reveals once again what digital rights organisations have been denouncing for years: that Big Tech profits (in audience, then in contacts, then in user profiles, then in advertising) from everything that is violence and hatred, from everything that is over the top.
And within all this there is a further element that can only be described as racist, colonialist. Every time a case ‘breaks out’, such as Facebook’s ‘non-intervention’ during the 6 January assault on the Capitol, Zuckerberg is forced to intervene. In America the rules are now a little stricter, the ‘moderators’, the people who check messages, have grown in number, and it has been decided to suspend paid advertisements on the eve of elections. But what is true in the US is not true in Africa. Here, moderation is mostly outsourced to American giants that pay their staff less than a dollar an hour, forcing them to sit in front of a screen ten, twelve hours a day.
And it was precisely in Kenya, thanks to the courage of a 27-year-old, that it came to light that Sama, one of Facebook’s biggest contractors, relies on a form of employment that could easily be described as slave labour. There is enough here, in short, for Frances Haugen (yes, the American whistleblower who shook the Facebook empire last year by revealing how much wealth the group’s lack of controls brought it), commenting on the Foxglove and Global Witness study, to say that it is terrible to realise ‘that safety in Kenya is worth much less to them than in the United States’.
Cover image: Glen Carrie