Political disinformation and hate speech are going viral on TikTok in Kenya ahead of a critical national election, according to a new report from the Mozilla Foundation.
The report's researcher, Odanga Madung, analyzed 130 videos from 33 accounts that collectively have over 4 million views. Madung found that targeted incitements to violence against specific ethnic groups, along with manipulated or outright fake content, were “present and spreading” on TikTok.
Some, but not all, of the problematic videos were removed from the platform on Tuesday following the company’s review of the report. However, most should never have been allowed to exist and proliferate in the first place.
TikTok’s policies in theory ban this type of disinformation and targeted attacks, but the company’s moderation strategy isn’t working, according to Mozilla. Madung interviewed former content moderators from the social media site and found “a moderation ecosystem that lacks both the context and resources to adequately engage with election disinformation in Kenya.” Still, a spokesperson for TikTok told Gizmodo that the company is committed to protecting the platform’s integrity and has a team working to safeguard TikTok during the Kenyan elections.
“Rather than learn from the mistakes of more established platforms like Facebook and Twitter, TikTok is following in their footsteps, hosting and spreading political disinformation ahead of a delicate African election,” said the report. Madung compared the political content on the platform to the previous work of Cambridge Analytica on Facebook surrounding the country’s 2017 elections.
On August 9, Kenya is set to hold votes for president, national assembly members, senate, governors, and county representatives. It’s a big event for the country, where elections have previously been contested and tinged with violence.
In 2007, an election catapulted the country into crisis and national catastrophe. At least 1,300 people were killed in ensuing ethnic violence. Before the last major vote in 2017, an electoral commission official was murdered. Then, the losing candidate refused to concede, claiming electoral fraud. Dozens of people were killed by police in protests. Although the conditions of democracy in Kenya have been seemingly improving, the report points out that fears about electoral violence remain heavily present.
Obviously, the nation is hoping to avoid chaos and bloodshed this time around, but TikTok is complicating that effort.
The platform is the most downloaded app in Kenya, according to Similarweb, and is ranked the highest-grossing app in the country by App Figures. Videos under hashtags related to politics on TikTok have over 20 million views on the platform, and the most popular of those clips have nearly 1 million views individually. Conversely, on Instagram, the most popular political videos have only been watched a few hundred times, according to Mozilla.
From the report:
A highly sophisticated disinformation campaign is underway on the platform, which includes slickly produced video content and attack ads spewing false claims about candidates, while also threatening various ethnic communities. Many of the videos are getting outsized viewership in comparison to their followership — and according to researchers, this suggests that the content may be gaining amplification from TikTok’s For You Page algorithm.
TikTok’s policies specifically bar hate speech and the company claims it doesn’t allow posts “promoting or supporting any hateful ideology.” Yet Madung’s analysis found lots of videos containing “explicit threats of ethnic violence.” The report speculated that TikTok lacks the cultural context to appropriately review and remove this type of content.
“It’s common to find moderators being asked to moderate videos that were in languages and contexts that were different from what they understood. For example I at one time had to moderate videos that were in Hebrew despite me not knowing the language or the context,” former TikTok moderator, Gadear Ayed, told the report authors.
Madung also found that many videos containing violent imagery (including from the events following the 2007 election) garnered particularly high views on the platform. The report pointed out that this proliferation and popularity of graphic content seems to go against TikTok’s stated policy against recommending graphic content.
Finally, the researcher noted that manipulated content, like heavily cut videos overlaid with false narration and collages containing fake tweets and headlines, was common on TikTok, once again in contrast to the company’s policies.
Speed requirements, quotas, and a lack of strategy for detecting manipulated content also likely contribute to the platform’s moderation issues, said Mozilla.
TikTok is defending the way it moderates content. A spokesperson told us, “We prohibit and remove election misinformation, promotions of violence, and other violations of our policies and partner with accredited fact-checkers, including Agence France-Presse (AFP) in Kenya. We’re also engaging locally with NGOs and will roll out product features to connect our community with authoritative information about the Kenyan elections in our app.”
What’s happening on TikTok in Kenya is also likely happening elsewhere in the world and on the internet. Previous Mozilla research has found similar issues of moderation evasion among U.S. political influencers and disinformation on other platforms. However, Twitter and Facebook have been under pressure in Kenya for years to clean up their content, whereas TikTok is just emerging into the world of political platforming.
To fix the problem before it gets even worse, the report recommended a few strategies: Increased transparency around platform practices and algorithms, more local partnerships and robust fact-checking, and stronger policies against violent content.