Facebook has a misinformation problem and is blocking access to data about how much there is and who is affected


Leaked internal documents suggest that Facebook – which recently rebranded itself as Meta – is doing far worse than it claims at minimizing COVID-19 vaccine misinformation on the Facebook social media platform.

Online misinformation about the virus and vaccines is a major concern. In one study, survey respondents who got some or all of their news from Facebook were much more likely to resist the COVID-19 vaccine than those who got their news from mainstream media sources.

As a researcher who studies social and civic media, I think it’s extremely important to understand how misinformation spreads online. But this is easier said than done. Simply counting instances of misinformation found on a social media platform leaves two key questions unanswered: How likely are users to encounter the misinformation, and are certain users especially likely to be affected by it? These questions are the denominator problem and the distribution problem.

The COVID-19 misinformation study “Facebook’s Algorithm: a Major Threat to Public Health,” published by the public interest group Avaaz in August 2020, reported that sources that frequently shared health misinformation – 82 websites and 42 Facebook pages – had an estimated total of 3.8 billion views in a year.

At first glance, that’s a startlingly large number. But it’s important to remember that this is the numerator. To understand what 3.8 billion views in a year means, you also have to calculate the denominator. The numerator is the part of a fraction above the line, which is divided by the part of the fraction below the line, the denominator.

Get some perspective

One possible denominator is 2.9 billion monthly active Facebook users, in which case, on average, every Facebook user has been exposed to at least one piece of content from these health misinformation sources. But these are 3.8 billion content views, not discrete users. How many pieces of content does the average Facebook user encounter in a year? Facebook does not disclose that information.

Without knowing the denominator, a numerator doesn’t tell you much.
The Conversation US, CC BY-ND

Market researchers estimate that Facebook users spend from 19 minutes a day to 38 minutes a day on the platform. If Facebook’s 1.93 billion daily active users see an average of 10 posts in their daily sessions – a very conservative estimate – the denominator for those 3.8 billion pieces of content per year is 7,044 billion (1.93 billion daily users times 10 daily posts times 365 days in a year). This means that roughly 0.05% of the content on Facebook is posts by these suspect Facebook pages.
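For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python using the figures cited above; the 10-posts-per-day figure is the conservative assumption stated in the text, not a Facebook-reported statistic.

```python
# Back-of-the-envelope calculation of the denominator and the share of
# content coming from the sources flagged by Avaaz, using the figures above.

daily_active_users = 1.93e9   # Facebook daily active users (reported figure)
posts_seen_per_day = 10       # conservative assumption from the text
days_per_year = 365

# Denominator: total pieces of content viewed on Facebook in a year
denominator = daily_active_users * posts_seen_per_day * days_per_year   # ~7,044 billion

# Numerator: views attributed to the flagged misinformation sources
numerator = 3.8e9

share = numerator / denominator
print(f"Denominator: {denominator:,.0f} content views per year")
print(f"Share from flagged sources: {share:.4%}")   # roughly 0.05%
```

Raising the posts-per-day assumption toward more realistic usage would only make the estimated share smaller.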

The figure of 3.8 billion views encompasses all of the content posted on those pages, including innocuous health content, so the proportion of Facebook posts that are health misinformation is smaller than one-twentieth of one percent.

Is it worrying that there is enough misinformation on Facebook that everyone has likely encountered at least one instance? Or is it reassuring that 99.95% of what is shared on Facebook does not come from the sources Avaaz warns about? Neither.

Distribution of misinformation

In addition to estimating a denominator, it is also important to consider the distribution of this content. Is everyone on Facebook equally likely to encounter health misinformation? Or are people who identify as anti-vaccine, or who seek out “alternative health” information, more likely to encounter this type of misinformation?

Another social media study, focusing on extremist content on YouTube, offers a method for understanding the distribution of misinformation. Using browser data from 915 web users, an Anti-Defamation League team recruited a large, demographically diverse sample of US internet users and oversampled two groups: heavy YouTube users and individuals who showed strong negative racial or gender attitudes in a set of questions asked by the researchers. Oversampling means surveying a small subset of a population more than its proportion of the population in order to better record data about that subset.

The researchers found that 9.2% of participants viewed at least one video from an extremist channel and 22.1% viewed at least one video from an alternative channel during the months covered by the study. An important piece of context: a small group of people was responsible for most of the views of these videos. And more than 90% of views of extremist or “alternative” videos came from people who reported a high level of racial or gender resentment in the pre-study survey.

While about 1 in 10 people encountered extremist content on YouTube and 2 in 10 encountered content from right-wing provocateurs, most people who came across such content “bounced off” it and went elsewhere. The group that found extremist content and sought out more of it consisted of people who presumably had an interest: people with strong racist and sexist attitudes.

The authors concluded that “consumption of this potentially harmful content is instead concentrated among Americans who are already high in racial resentment,” and that YouTube’s algorithms may reinforce this pattern. In other words, just knowing the fraction of users who encounter extreme content doesn’t tell you how many people are consuming it. For that, you also need to know the distribution.

Superspreaders or Whack-a-mole?

A widely publicized study by the anti-hate-speech advocacy group Center for Countering Digital Hate, titled Pandemic Profiteers, showed that of 30 anti-vaccine Facebook groups examined, 12 anti-vaccine celebrities were responsible for 70% of the content circulated in these groups, and the three most prominent were responsible for nearly half. But again, it’s critical to ask questions about denominators: How many anti-vaccine groups are hosted on Facebook? And what proportion of Facebook users encounter the type of content shared in these groups?

Without information on denominators and distribution, the study reveals something interesting about these 30 anti-vaccine Facebook groups, but nothing about medical disinformation on Facebook as a whole.

Facebook says it is fighting COVID-19 disinformation on its platforms, but without knowing the extent of the problem, there is no way to judge the company’s efforts.
Andrew Caballero-Reynolds / AFP via Getty Images

These types of studies beg the question, “If researchers can find this content, why can’t the social media platforms identify and remove it?” The Pandemic Profiteers study, which suggests that Facebook could solve 70% of its medical misinformation problem by deleting only a dozen accounts, explicitly advocates the deplatforming of these disinformation dealers. However, I found that 10 of the 12 anti-vaccine influencers featured in the study have already been removed by Facebook.

Consider Del Bigtree, one of the three most prominent spreaders of vaccination misinformation on Facebook. The problem is not that Bigtree is recruiting new anti-vaccine followers on Facebook; it’s that Facebook users follow Bigtree on other websites and bring his content into their Facebook communities. It’s not 12 individuals and groups posting health misinformation online – it’s likely thousands of individual Facebook users sharing misinformation found elsewhere on the web, featuring these dozen people. Banning thousands of Facebook users is much harder than banning 12 anti-vaccine celebrities.

This is why the denominator and distribution issues are essential to understanding misinformation online. The denominator and the distribution allow researchers to ask how common or rare online behaviors are, and who engages in those behaviors. If millions of users each occasionally encounter incorrect medical information, warning labels can be an effective response. But if medical misinformation is consumed mostly by a smaller group that actively seeks out and shares this content, those warning labels are most likely useless.


Get the right data

Trying to understand misinformation by counting it, without considering denominators or distribution, is what happens when good intentions collide with poor tools. No social media platform makes it possible for researchers to accurately calculate how prominent a particular piece of content is across its platform.

Facebook restricts most researchers to its CrowdTangle tool, which shares information about content engagement, but engagement is not the same as content views. Twitter explicitly prohibits researchers from calculating a denominator: either the number of Twitter users or the number of tweets shared in a day. YouTube makes it so difficult to find out how many videos are hosted on its service that Google routinely asks interview candidates to estimate the number of YouTube videos as a way to assess their quantitative skills.

Executives of social media platforms have argued that their tools, despite their problems, are good for society, but this argument would be more compelling if researchers could independently verify this claim.

As the societal impacts of social media become more important, the pressure on big tech platforms to publish more data about their users and their content is likely to increase. If these companies respond by increasing the amount of information researchers can access, take a very close look: will they let researchers study the denominator and distribution of online content? And if not, are they afraid of what the researchers will find?
