FB, Twitter, TikTok, YouTube’s 2022 anti-misinfo efforts ‘full of empty promises’ – group

Media reform group Free Press advises journalists covering the tech sector to ‘not take anything from the platforms at face value’

MANILA, Philippines — US-based media reform advocacy group Free Press found the 2022 anti-disinformation efforts of four major social media platforms – Facebook, Twitter, YouTube, and TikTok – to be “weak” and “full of empty promises.”

The group is part of Change The Terms, a coalition of more than 60 civil and consumer rights organizations that called on the companies this year to implement 15 priority reforms that would “address the algorithmic amplification of hatred and lies,” protect users in all languages, and increase corporate transparency.

The coalition called for the implementation ahead of the US midterm elections, as US elections have in recent years become a test of whether methods and policies against disinformation have improved.

Free Press found the companies’ efforts sorely lacking, citing the following key findings.

The four companies did not provide enough data to show whether there are significant gaps in the application and enforcement of their policies. The problem is exacerbated, the group says, by the difficulty of keeping track, as the companies have created a “maze of commitments, announcements, and corporate policies.”

Meta’s policies fully meet only two of the 15 demands: banning calls to arms, and applying third-party fact-checking to political ads. Notably, TikTok and Twitter do not allow political ads at all. TikTok also meets one demand, likewise banning calls to arms.

[Chart legend] Green face – the company meets the demand in a stated policy; orange face – the company insufficiently or incompletely addresses the demand in a stated policy; red face – the company fails to meet the demand; * – the company’s performance could not be assessed due to a lack of transparency.

The four companies are also failing to shut down the so-called “public interest” exceptions granted to politicians and other prominent users, which allow them to post content that may be false. Under these policies, posts can be kept online because the companies deem what the public figure says to be newsworthy. The group calls this policy “arbitrary,” saying it can often be used simply as a get-out-of-jail-free card.

Video platforms TikTok and YouTube do not report “denominators” for violative videos, which would provide context on how many people may have viewed the videos, or how long the videos remained viewable before they were eventually deleted. The group also said it is a problem when platforms report content takedowns but do not provide the full picture.

For example, YouTube previously boasted that it removed more than 4 million non-compliant videos from April to June 2022. But the platform “does not report what the ratio is to all videos that existed on the platform during that time.” Without such context, it is difficult to assess what percentage of videos on YouTube are non-compliant.

Complete data needed to back companies’ claims

“While tech companies have promised to tackle misinformation and hate on their platforms this fall, there is a noticeable gap between what companies say they want to do and what they actually do in practice. The platforms do not have enough policies, practices, AI, or human capital in place to materially mitigate harm before and during the November midterms,” Free Press said.

“We cannot take these companies at their word. We need transparent records of their implementation of security mechanisms and enforcement of their own policies.”

Rating each company, Free Press said that while Meta’s regular announcements look promising, “they’re just that: promises.”

The group found instances of posts continuing to spread false claims about US election fraud, such as those targeting election workers, remaining on the platform and “falling through the cracks.” The group also noted slower action against non-English misinformation.

Meta also eliminated a “Responsible Innovation” team of civil rights experts and combined several civic integrity teams, which insiders said was a cost-cutting measure.

The group also found false claims about US voter fraud on TikTok, with one user repeatedly able to return to the platform even after TikTok took action. Twitter’s policies lacked detail, and “there are discrepancies between Twitter’s election-related blog posts and Twitter’s policies in the Terms of Service.”

The group also said: “YouTube has the biggest gaps in policy protections. The company lacks transparency about its approach to violative content. There are also few details about moderation and enforcement practices (such as the existence of civic integrity teams, moderation across languages, etc.).”

“Although they claim to have developed and enforced new policies aimed at combating the spread of such toxic content, these claims are difficult for independent auditors to verify. Company websites are a tangle of conflicting policies and standards that are difficult to untangle. Journalists covering the tech sector shouldn’t take anything from the platforms at face value. Each claim should be supported by empirical evidence and an overview of its impact.” – Rappler.com