Ukraine forces Facebook, TikTok, YouTube and Twitter to rethink their rules

The removals illustrate how Internet platforms have scrambled to adapt content policies built around notions of political neutrality to a context of war. And they suggest those rulebooks – the ones that govern who can say what online – need a new chapter on geopolitical conflict.

“Companies are setting precedent as they go,” says Katie Harbath, CEO of tech policy consultancy Anchor Change and former director of public policy at Facebook. “Part of my concern is that we’re all thinking about the short term” in Ukraine, she says, rather than the underlying principles that should guide how platforms approach wars around the world.

Acting quickly in response to a crisis is not a bad thing in itself. For technology companies that have become the de facto gatekeepers of online information, reacting quickly to world events and changing the rules when necessary is essential. Overall, the social media giants showed an unusual willingness to take a stand against the invasion, prioritizing their responsibilities to Ukrainian users and their ties to democratic governments over their desire to remain neutral, even at the price of being banned from Russia.

The problem is that they have piggybacked their responses to war onto the same global, one-size-fits-all frameworks they use to moderate peacetime content, says Emerson T. Brooking, senior resident fellow at the Atlantic Council’s Digital Forensic Research Lab. And their often opaque decision-making processes leave their policies vulnerable to misinterpretation and questions of legitimacy.

Big tech companies now have playbooks for terror attacks, elections, and pandemics — but not wars.

According to Brooking, what platforms such as Facebook, Instagram, YouTube and TikTok need is not another rigid set of rules that can be generalized to every conflict, but a set of wartime processes and protocols that can be applied flexibly and contextually when fighting breaks out – loosely analogous to the commitments technology companies made to tackle terrorist content after the 2019 Christchurch massacre in New Zealand. Facebook and other platforms have also developed special protocols over the years for elections, ranging from “war rooms” that monitor foreign interference and disinformation campaigns to policies specifically prohibiting misinformation about how to vote, as well as for the covid-19 pandemic.

The war in Ukraine should give them the impetus to think in the same systematic way about the kind of “break-glass” policy measures that might be needed specifically in cases of war, uprising or sectarian fighting, says Harbath of Anchor Change – and what the criteria would be for applying them, not only in Ukraine but in conflicts around the world, including those that receive less public and media attention.

Facebook, for its part, has at least started down this road. The company says it began building dedicated teams in 2018 to “better understand and address how social media is used in countries in conflict” and that it has hired more people with local and subject-matter expertise in Myanmar and Ethiopia. Still, its actions in Ukraine — a country that had struggled to get Facebook’s attention on Russian misinformation as early as 2015 — show it still has work to do.

Atlantic Council’s Brooking thinks Facebook probably made the right decision by asking its moderators not to enforce the company’s normal rules against calls for violence when Ukrainians express outrage over the Russian invasion. Banning Ukrainians from saying anything mean about Russians online while their cities are being bombed would be needlessly cruel. But the way these changes came to light – via a leak to the Reuters news agency – led to misinterpretations, which Russian leaders capitalized on to demonize the company as Russophobic.

After an initial backlash, including threats from Russia to ban Facebook and Instagram, parent company Meta clarified that calling for the death of Russian leader Vladimir Putin was still against its rules, perhaps hoping to preserve its presence there. If so, it didn’t work: a Russian court on Monday officially decreed the ban, and Russian authorities are pushing to label Meta an “extremist organization” in a crackdown on speech and media.

In reality, Meta’s moves appear to have been consistent with its approach in at least some prior conflicts. As Brooking noted in Slate, Facebook also seems to have quietly relaxed enforcement of its rules prohibiting calls for or glorification of violence against the Islamic State in Iraq in 2017, against the Taliban in Afghanistan last year and on both sides of the war between Armenia and Azerbaijan in 2020. If the company hoped that changing its moderation guidelines piecemeal and in secret for each conflict would allow it to avoid scrutiny, the Russia debacle proves otherwise.

Ideally, in the event of war, the tech giants would have a framework for making such weighty decisions in conjunction with experts in human rights, internet access and cybersecurity, as well as experts on the region in question and possibly even officials from the relevant governments, Brooking suggests.

In the absence of established processes, the major social platforms ended up banning Russian state media in Europe reactively rather than proactively, presenting the moves as compliance with demands from the European Union and European governments. Meanwhile, the same accounts remained active in the United States on some platforms, reinforcing the perception that the removals were not the companies’ choice. That risks setting a precedent that could come back to haunt the companies when authoritarian governments demand future bans on outside media outlets or even opposition parties in their own countries.

Wars also pose particular challenges to tech platforms’ notions of political neutrality, misinformation and depictions of graphic violence.

US-based tech companies have clearly chosen a side in Ukraine, and it comes at a cost: Facebook, Instagram, Twitter and now Google News have all been blocked in Russia, and YouTube could be next.

Yet the companies have not clearly defined the basis on which they have taken this position, or how it might apply in other contexts, from Kashmir to Nagorno-Karabakh, Yemen and the West Bank. While some, including Facebook, have developed overarching state-media policies, others have cracked down on Russian media without specifying the criteria under which they might take similar action against, say, Chinese state media.

Harbath, the former Facebook policy director, said a hypothetical conflict involving China is the kind of thing tech giants — along with other major Western institutions — should be planning for now, rather than relying on the reactive approach they used in Ukraine.

“It’s easier said than done, but I’d like to see them build their capacity for longer-term thinking,” says Harbath. “The world continues to move from crisis to crisis. They need a group of people who won’t get caught up in the day-to-day,” who can “think about some of the playbooks” they will turn to in future wars.

Facebook, Twitter and YouTube have embraced the concept of “misinformation” as a descriptor for false or misleading content about voting, covid-19 or vaccines, with mixed results. But the war in Ukraine highlights the inadequacy of this term to distinguish between, say, pro-Russian disinformation campaigns and pro-Ukrainian myths such as the “Ghost of Kyiv.” Both may be factually dubious, but they play very different roles in the battle for information.

The platforms seem to understand this intuitively: there has been no widespread crackdown on Ukrainian media for spreading what could rightly be considered resistance propaganda. Yet they still struggle to adapt old vocabulary and policies to such distinctions.

For example, Twitter justified the removal of Russian misinformation about the Mariupol hospital bombing under its policies on “abusive behavior” and “denying mass casualty events”; the latter was designed for behavior such as Alex Jones’s denial of the Sandy Hook shooting. YouTube cited a similar 2019 policy on “hateful” content, including Holocaust denial, in announcing that it would ban any video minimizing the Russian invasion.

When it comes to depictions of graphic violence, it makes sense for a platform such as YouTube to prohibit, for example, videos of corpses or killings under normal circumstances. But in wartime, such images can be crucial evidence of war crimes, and removing them could help perpetrators cover up their acts.

YouTube and other platforms have exemptions to their policies for newsworthy or documentary content. And, to their credit, they seem to treat such videos and images with relative care in Ukraine, says Dia Kayyali, associate advocacy director at Mnemonic, a nonprofit devoted to archiving evidence of human rights abuses. But this raises questions of consistency.

“They are doing a lot of things in Ukraine that defenders around the world have asked of them in other circumstances, which they have been unwilling to provide,” Kayyali says. In the Palestinian territories, for example, platforms suppress “a lot of political speech, a lot of people speaking out against Israel, against human rights abuses.” Facebook has also been accused in the past of censoring posts highlighting police brutality against Muslims in Kashmir.

Of course, it’s not just tech companies that have paid greater attention to — and taken a stronger stance on — Ukraine than other human rights crises around the world. The same could be said of the media, governments and the general public. But for the Silicon Valley giants who pride themselves on being global and systematic in their outlook — even if their actions don’t always reflect it — a more cohesive set of criteria for responding to disputes seems like a reasonable request.

“I would like to see the level of contextual analysis that Meta does for its exceptions to the rules against inciting violence against Russian soldiers, and for praise of the Azov Battalion” — the neo-Nazi Ukrainian militia that has resisted the Russian invasion — “applied to conflicts in the Arabic-speaking world,” Kayyali says. “It’s not too late for them to start doing some of these things elsewhere.”