Facebook and Twitter release 2022 midterm policies to fight the Big Lie


For months, activists have urged tech companies to tackle the spread of lies claiming the 2020 presidential election was stolen – warning that such misinformation could delegitimize the 2022 midterm elections, in which all seats in the House of Representatives and more than a third of the Senate are up for grabs.

Yet the social media giants are moving forward with a familiar playbook for policing misinformation this election cycle, even as false claims that the last presidential election was fraudulent continue to plague their platforms.

Facebook again is choosing not to remove certain election fraud allegations, instead relying on labels that redirect users to authoritative information about the election. Twitter says it will apply misinformation labels to, or remove, posts that undermine trust in the electoral process, such as unverified claims of vote-rigging in the 2020 race that violate its rules. (The company didn’t say when it would remove offending tweets, but said labeling reduces their visibility.)

That contrasts with platforms such as YouTube and TikTok, which ban and remove claims that the 2020 election was rigged, according to their recently released election plans.

Disinformation experts warn that the strictness of the companies’ policies, and how aggressively they enforce them, could be the difference between a peaceful transfer of power and an election crisis.

“The ‘Big Lie’ has become entrenched in our political discourse, and it has become a talking point for election deniers to preemptively declare that the midterm elections are going to be stolen or filled with voter fraud,” said Yosef Getachew, a media and democracy program director at the liberal government watchdog Common Cause. “What we’ve seen is that Facebook and Twitter aren’t really doing the best job, or any job, in terms of suppressing and countering misinformation around the ‘Big Lie.’”

The political stakes of these content moderation decisions are high, and the most effective path forward is unclear, especially as companies balance their desire to support free expression against their interest in preventing content on their networks from endangering people or the democratic process.


In the 41 states that have held nominating contests this year, more than half of GOP winners to date — about 250 candidates in 469 contests — have embraced Trump’s false claims about his defeat two years ago, according to a recent Washington Post analysis. In the 2020 battleground states, candidates who deny the legitimacy of that election have claimed nearly two-thirds of GOP nominations for state and federal offices with authority over elections, according to the analysis.

And these candidates are taking to social media to spread their election-related lies. According to a recent report by Advance Democracy, a nonprofit that studies disinformation, candidates endorsed by Trump and those linked to the QAnon conspiracy theory have posted voter fraud claims on Facebook and Twitter hundreds of times, drawing hundreds of thousands of interactions and retweets.

The findings follow months of revelations about the role of social media companies in facilitating the “stop the steal” movement that led to the siege of the U.S. Capitol on Jan. 6. A Washington Post and ProPublica investigation earlier this year found that Facebook was hit by a deluge of posts – at a rate of 10,000 a day – attacking the legitimacy of Joe Biden’s victory between Election Day and Jan. 6. Facebook groups, in particular, became incubators for President Donald Trump’s baseless claims of election rigging before his supporters stormed the Capitol, demanding he get a second term.

“Candidates who don’t concede aren’t necessarily new,” said Katie Harbath, former director of public policy at Facebook and a technology policy consultant. “It … poses an increased risk [now] because it is accompanied by a [higher] threat of violence,” though it’s unclear whether that risk is the same this year as it was in the 2020 race, when Trump was on the ballot.


Facebook spokesperson Corey Chambliss confirmed that the company will not outright remove posts from ordinary users or candidates claiming there is widespread voter fraud, that the 2020 election was rigged or that the upcoming 2022 midterm elections are fraudulent. Facebook, which rebranded itself as Meta last year, does ban content that violates its rules against incitement to violence, including threats of violence against election officials.

Social media companies such as Facebook have long preferred a hands-off approach to risky political content, in part to avoid having to make tough calls about which posts are true.

And while the platforms have often been willing to remove posts that seek to confuse voters about the electoral process, their decisions to act against more subtle forms of voter suppression — particularly from politicians — have often been politically fraught.

Civil rights groups have often criticized them for failing to adopt policies against more subtle messages designed to sow doubt in the electoral process, such as claims that it isn’t worth it for Black people to vote, or that voting isn’t worth it because of long lines.


In the run-up to the 2020 election, civil rights groups lobbied Facebook to expand its voter suppression policies to address some of these indirect attempts to manipulate the vote, and to enforce its rules against Trump’s comments more aggressively. Some groups argued, for example, that Trump’s repeated messages questioning the legitimacy of mail-in ballots could discourage vulnerable populations from participating in elections.

But when Twitter and Facebook attached labels to some of Trump’s posts, conservatives criticized them, saying the companies’ policies discriminated against right-wing politicians.

These decisions are further complicated by the fact that it is not entirely clear whether labels are effective at changing user perceptions, experts say. According to Joshua Tucker, a professor at New York University, warnings that posts might be misleading could prompt users to question the veracity of the content, or could backfire among people who already believe in such conspiracies.

A user may look at a label and think, “Oh, I should [question] this information,” Tucker said. Or a user might see a warning label “and say, ‘Oh, that’s yet more evidence that Facebook is biased against conservatives.’”


And even if labels work on one platform, they may not work on another, or they may push people who are annoyed by them toward other platforms with more permissive content moderation standards.

Facebook said users complained that its election-related labels were overused, according to a post from Nick Clegg, the company’s president of global affairs, and the company is considering a more tailored strategy this cycle. Twitter, on the other hand, said it saw positive results last year when it tested redesigned misinformation labels that redirected people from debunked content to accurate information, according to a blog post.

Yet the specific policies the social media giants adopt may matter less than the resources they deploy to detect and act on rule-breaking posts, experts say.

“There are so many unanswered questions about the effectiveness of enforcing these policies,” Harbath said. “How is all of this actually going to work in practice?”
