Facebook fixed? How EU data law puts pressure on Big Tech.

Why we wrote this

A new European law calls on big tech companies to open their algorithmic “black boxes” and better moderate online speech. The objective is nothing less than to preserve the public square on which democracies depend.

Brussels

Sweeping new legislation from the European Union aims to “revolutionize” the internet, forcing social media giants including Facebook and YouTube to take action to curb the spread of extremism and misinformation online.

Known as the Digital Services Act (DSA), it is likely to create ripple effects that could also change the behavior of social media platforms in America.

In one of the most striking requirements of the new law, Big Tech companies with more than 45 million users will have to give regulators access to their so-called algorithmic black boxes, shedding light on how certain messages – especially divisive ones – end up at the top of social media newsfeeds.

Companies must also implement systems designed to speed up the removal of illegal content from the web, prioritizing requests from “trusted flaggers.”

And if platforms recognize patterns that cause harm and fail to act, they will be subject to heavy fines.

“We need to get under the hood of platforms and look at how they amplify and disseminate harmful content such as hate speech,” says Joe Westby, deputy director of Amnesty Tech at Amnesty International in London.

“The DSA is a landmark law that attempts to hold these big tech companies to account,” he adds.

Unlocking the Big Tech Business Model

Big Tech companies have long sought to evade regulation by invoking freedom of speech. The DSA’s premise is that while ugly and divisive speech should not be censored, it should also not be promoted – or artificially amplified.

But to sell ads and collect user data – which they also sell – the big online platforms have done just that.

The key to this business model is to keep users online for as long as possible, in order to collect as much data about them as possible.

And research has shown that what drives people to read and click is content that makes them angry, notes Jan Penfrat, senior policy adviser at European Digital Rights, a Brussels-based advocacy group.

This in turn prompts Big Tech companies to prioritize and deliver anger-inducing content that prompts users to “react and respond”, he says.
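The dynamic Mr. Penfrat describes can be sketched as a toy ranking function – a hypothetical illustration, not any platform’s actual code – in which posts predicted to provoke reactions and shares score higher and therefore surface first in a feed:

```python
# Toy sketch of engagement-based ranking (hypothetical; no platform's
# real algorithm is shown here). Posts predicted to provoke reactions
# and shares score higher, so anger-inducing content floats to the top.

def engagement_score(post):
    # Weight predicted reactions and shares heavily relative to clicks.
    return (post["predicted_reactions"] * 2.0
            + post["predicted_shares"] * 3.0
            + post["predicted_clicks"])

def rank_feed(posts):
    # Highest engagement score first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_news", "predicted_reactions": 10,
     "predicted_shares": 2, "predicted_clicks": 50},
    {"id": "outrage_post", "predicted_reactions": 80,
     "predicted_shares": 40, "predicted_clicks": 30},
]

for post in rank_feed(posts):
    print(post["id"])  # the outrage-driven post ranks first
```

Even in this simplification, the calm informational post loses to the provocative one despite drawing more clicks – the pattern the leaked Facebook documents described.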

This point was highlighted last year through a wealth of internal documents made public by Frances Haugen, a whistleblower and former data engineer at Facebook.

In leaked corporate communications, an employee laments that extremist political parties celebrate Facebook’s algorithms for rewarding their “provocative strategies” on topics ranging from racism to immigration and the welfare state.

It was one of many examples in those documents of how Facebook’s algorithms appeared to “artificially amplify” hate speech and misinformation.

In an attempt to address this issue, the DSA will require Big Tech companies to conduct and publish annual “impact assessments,” which will examine their “user ecosystem and whether – or how – the recommendation algorithms drive traffic,” says Peter Chase, senior fellow at the German Marshall Fund in Brussels.

“It asks these big platforms to think about the social impact they have.”

There are insights to be gained from these sorts of regular exercises, analysts say.

Twitter, which has a reputation for publishing self-critical research, released an internal assessment last October that found its own algorithms favored conservative political content over left-leaning content.

What it couldn’t figure out, the company admitted, was why.

The DSA aims to shed some light on this front by forcing Big Tech companies to open their algorithmic black boxes to academic researchers approved by the European Commission.

In this way, EU officials hope to gain insight into, among other things, how Big Tech companies moderate and rank social media posts. “On what basis do they recommend certain types of content over others? Or hide or demote it?” asks Mr. Penfrat.

And under the law, if Big Tech companies discover artificial amplification patterns that promote hate speech and misinformation spread by bad actors and bots – what social media companies call “coordinated inauthentic behavior” – and don’t take action to stop it, they face devastating fines.

These could represent up to 6% of a company’s global annual sales. Repeat offenders could be banned from operating in the EU.
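For a rough sense of the scale involved, the 6% ceiling can be worked out directly. The revenue figure below is hypothetical, chosen only for illustration, not any company’s reported results:

```python
# Illustrative arithmetic for the DSA's penalty ceiling: fines of up
# to 6% of a company's global annual revenue. The revenue figure is
# hypothetical, used only to show the order of magnitude.
global_annual_revenue = 100_000_000_000  # $100 billion (hypothetical)
max_fine = 0.06 * global_annual_revenue
print(f"Maximum DSA fine: ${max_fine:,.0f}")  # $6,000,000,000
```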

“They have to do something about it, or they can get caught,” says Alex Engler, a researcher in governance studies at the Brookings Institution.

“So they can’t just shrug their shoulders and say, ‘We don’t have a problem.’”

Margrethe Vestager, the European Commission’s executive vice president for a Europe Fit for the Digital Age, speaks at a press conference on the Digital Services Act and the Digital Markets Act at the commission’s headquarters in Brussels, Dec. 15, 2020.

“Weaponized” Ads – and the Law’s Response

Such evasion is precisely what has characterized Big Tech companies so far, and analysts say it’s largely because promoting divisive content has been so hugely profitable.

Mr. Penfrat recalls the surprise of the European policymakers he lobbied when he explained the almost unfathomable amount of personal data that tech giants trade – and how they often harness the emotional power of anger through a “surveillance-based” advertising model.

“Every time you open a website, hundreds of companies bid for your eyeballs,” he says. Within “milliseconds”, ads pushed by data brokers that won the auction are loaded for people to see.
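The split-second bidding Mr. Penfrat describes can be sketched as a simple sealed-bid auction – a hypothetical simplification of real-time bidding, in which advertisers bid for an impression and the highest bidder’s ad is served:

```python
# Simplified sketch of real-time ad bidding (hypothetical; real ad
# exchanges run first- or second-price auctions over far more signals).
# When a page loads, advertisers bid for the impression within
# milliseconds, and the winner's ad is shown to the user.

def run_auction(bids):
    # bids: mapping of advertiser name -> bid in dollars per impression.
    winner = max(bids, key=bids.get)
    return winner, bids[winner]

bids = {"advertiser_a": 0.45, "advertiser_b": 1.20, "advertiser_c": 0.80}
winner, price = run_auction(bids)
print(f"{winner} wins the impression at ${price:.2f}")
```

The advertiser names and prices are invented; the point is only that whoever values the viewer’s attention most gets to fill the slot – with whatever content they are paying to promote.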

But it’s not just goods and services that advertisers sell. “Anyone can pay Facebook to promote certain types of content – that’s what ads are. It can be political and issue-oriented,” says Mr. Penfrat.

And bad actors have taken advantage of this, he notes, pointing to how the Russian government has “weaponized” ads to push its favored candidates in US elections and to justify the war in Ukraine.

The DSA will prohibit the use of sensitive data, including race and religion, to target ads, and will also ban ads directed at children. It also makes it illegal to use so-called dark patterns, manipulative practices that trick people into agreeing to things like letting online companies track their data.

Additionally, it forces Big Tech companies to speed up their processes for removing illegal posts – including terrorist content, so-called revenge pornography, and hate speech in some countries that ban it – in part by giving priority to reports from “trusted flaggers,” which could include EU-approved nonprofit groups.

Likewise, if companies remove content that they believe violates these rules, they must notify those whose posts are removed, explain why, and have procedures for appeal.

“You have these mechanisms today, but they are very non-transparent,” says Mr. Penfrat. “You can appeal but never get a response.”

A European law with American effects

The DSA has been received by data policy experts with a mixture of skepticism and praise – with some concerns expressed about unintended damage to competition or the diversity of online discourse.

Still, the DSA is expected to drive policy in the United States as well as Europe, says Engler, who studies the impact of data technologies on society.

“It’s a classic example of the ‘Brussels effect’: the idea that when Europe regulates, it ends up having a global impact,” he adds. “Platforms don’t want to build a different infrastructure depending on whether the IP address is in Europe or not.”

And once academics are able to delve into the black boxes of Big Tech, the mitigations they suggest will not only be a good starting point for public debate, but could also inspire action in America.

During Ms. Haugen’s whistleblower testimony, US lawmakers signaled that they might be open to the kinds of regulations the DSA puts in place.

At a press conference following congressional testimony last October, Senator Richard Blumenthal, a Democrat from Connecticut, marveled at the bipartisan agreement on the need for reform.

“If you closed your eyes, you wouldn’t even know if it was a Republican or a Democrat,” he said. “Every part of the country suffers the misdeeds inflicted by Facebook and Instagram.”

US and European regulators say this is true on both sides of the Atlantic.
