Why we need an international body to rein in hate speech during conflict
‘It’s time to consider solutions that reflect the magnitude of the problem.’
Source: The New Humanitarian
Online hate speech and disinformation have long incited violence, and sometimes mass atrocities. When this has happened in the Global South, from Ethiopia to Myanmar, much of the world has looked away. But the war in Ukraine means no one can now ignore how social media is being weaponised in conflict. It’s time to consider solutions that reflect the magnitude of the problem, including an international panel with enough teeth to make a difference.
Twitter responded to the war in Ukraine by taking accounts seen to be inflaming the conflict offline – though its efforts were deemed insufficient by the Ukrainian government and its allies. But Meta, Facebook’s parent company, went in the other direction, controversially changing its hate speech policy to allow posts that would normally violate its rules, including calls for violence such as “death to Russian invaders”.
Poland, Lithuania, Estonia, and Latvia, meanwhile, sent an impassioned plea to executives at Google, Twitter, and Facebook, demanding that they censor and take down accounts – many of them Russian – that justified the war or praised war crimes. They also called for the suspension of government accounts, including those of state-controlled media in Russia and Belarus.
But this is far from the first time such problems have arisen.
In 2018, a UN fact-finding mission found social media – Facebook in particular – to have had a “determining role” in suspected genocide in Myanmar. In 2021, whistleblower Frances Haugen brought renewed attention to concerns about online speech and offline violence when she argued that Facebook was “literally fanning ethnic violence” in Ethiopia, and continued to do so in Myanmar.
Issues of censorship, online hate speech, and disinformation during conflict are a huge concern given their potential to exacerbate violence and erode election integrity. Yet how and when social media companies respond has long been an overlooked and under-addressed problem, particularly in countries beyond their priority (read: profitable) markets.
One of the main challenges is that many governments have neither the capacity nor the tools to effectively address these kinds of issues. As a consequence, they often resort to internet shutdowns as a form of control.
Furthermore, online community standards don’t exist in many non-European languages, and social media companies have been slow to adapt their platforms accordingly. Facebook’s own research, for example, indicates that its algorithms incorrectly delete Arabic content more than three times out of four. And because Amharic, Swahili, and Somali are far less prevalent online and low-resource (there is less text for AI to be trained on), accurate automated content removal happens even less often in these languages.
The imbalance of economic power between the Global South and social media companies – whose valuations can be many times the GDP of poorer countries – means the tech titans pay little heed to concerns about hate speech or disinformation campaigns in such nations, not least because they are marginal markets. There must be more dialogue and more engagement, but also new proposals to drive constructive solutions forward.
A new panel?
As greater self-moderation by large corporations remains a distant prospect, an Information Intervention Council, operating within a human rights framework and ideally grounded in the UN or supranational organisations such as the African Union, is one of the few ways to effectively address online hate and disinformation in times of war and conflict.
Such a council would oversee, advise on, support, and guide interventions, increasing transparency and accountability in the process.
The body would be responsible for conducting research: for example, independent investigations into the role social media platforms play in spreading hate during conflicts (recently recommended by Meta’s Oversight Board but yet to be actioned by the company), or verification of the roles played by other media in potential target states.
It could also establish guidelines for information interventions in violent conflicts and recommend which actions particular actors might take. Those guidelines would serve as a key lever for nudging the private sector – social media companies in particular – to comply with specific standards in times of conflict.
Given that the body’s aim would not be to settle disputes or interpret international law, it should not be structured like an international tribunal; rather, it should take the form of a dynamic council whose members are committed to addressing specific situations.
In addition to members representing any international organisations involved, temporary members should include representatives of social media companies operating in conflict zones, experts in the media and responsibility-to-protect fields, and members of civil society organisations.
This would not be the first time international actors, such as the UN, have intervened to halt incendiary media. And such actions can make a difference.
During the Rwandan genocide in 1994, for example, international forces overlooked the role radio played in mobilising violence. Learning from that failure, NATO forces became more interventionist in their response to propaganda in Bosnia and Herzegovina, targeting media outlets and seizing radio transmitters found to be mobilising violence.
At the time, there were calls to formalise these efforts against the spread of inflammatory speech through the establishment of an information intervention unit, among other solutions, as well as attempts to craft the legal and policy tools that would enable media interventions within a human rights framework in times of conflict. But these initiatives were never consolidated, and this kind of action has never been effectively translated to the world of social media.
There are, inevitably, risks. Formalising information interventions might legitimise moves towards certain modes of censorship. The threat of intervention could also disincentivise social media companies from operating in conflict-affected countries at all – a form of collateral censorship that would remove online spaces from those who need them for information and communication, especially in times of war.
The reality for most countries in the Global South is that setting up better systems to monitor and remove harmful online content is likely still a long way off. However, the responsibility for fixing these problems can’t just be left to the companies that are doing little, nor to the governments that are increasingly inclined to impose blanket internet shutdowns.
This global challenge requires a global response grounded in international human rights norms. An international body empowered to intervene would offer a legitimate pathway for addressing the most pressing concerns about how online speech causes offline harm.