September 19, 2021

TikTok, YouTube, and Facebook want to appear trustworthy. Don’t be fooled.

Meanwhile, YouTube is touting its transparency efforts, claiming in 2019 that it “launched more than 30 different changes to reduce borderline content recommendations and harmful disinformation,” resulting in “an average 70% drop in watch time of such content coming from non-subscribed recommendations in the United States.” However, without any way to verify these statistics, users get no real transparency.

Just as polluters greenwash their products by adorning the packaging with leafy imagery, major tech platforms are opting for style, not substance.

Platforms like Facebook, YouTube, and TikTok have their own reasons to resist more comprehensive forms of transparency. More and more internet platforms rely on AI systems to recommend and organize content, and it is clear that these systems can have negative consequences: voter misinformation, the radicalization of vulnerable people, and the polarization of large parts of the country. Mozilla’s YouTube research bears this out. And we’re not alone: the Anti-Defamation League, the Washington Post, the New York Times, and the Wall Street Journal have reached similar conclusions.

The dark side of AI systems can harm users, but these systems are a gold mine for platforms: rabbit holes and outrageous content keep users watching, and therefore consuming advertising. By allowing researchers and lawmakers to dig into these systems, the companies would open themselves to regulation and to public pressure for more trustworthy (but potentially less lucrative) AI. They would also invite even stronger criticism, because the problem is probably deeper than we know: investigations so far have been based on limited data sets.

As tech companies master false transparency, regulators and civil society at large must not fall into the trap. We have to call out style masquerading as substance. And then we have to go further: we need to describe what true transparency looks like, and demand it.

What does real transparency look like? First, it should apply to the parts of the internet ecosystem that affect consumers most, like AI-powered ads and recommendations. For political advertising, platforms should meet researchers’ basic demands by building databases of all relevant information that are easy to search and navigate. For recommendation algorithms, platforms should share crucial data, such as which videos are recommended and why, and build recommendation simulation tools for researchers.
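To make the researcher-facing demand concrete, here is a minimal sketch of what a machine-readable political ad archive entry and a basic search over it might look like. Every field name, value, and function here is hypothetical, invented for illustration; it is not any platform’s actual API or data.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical schema for one entry in a political ad archive.
# All field names are illustrative, not any platform's real API.
@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor: str            # who paid for the ad
    spend_usd_min: int      # disclosed spend range
    spend_usd_max: int
    first_shown: date
    last_shown: date
    impressions_min: int    # disclosed impression range
    impressions_max: int
    targeting: dict         # e.g. {"age": "18-34", "region": "US-OH"}
    creative_text: str      # the ad copy itself

def search_ads(archive: list[PoliticalAdRecord],
               sponsor: str | None = None,
               region: str | None = None) -> list[PoliticalAdRecord]:
    """Filter the archive the way a researcher might: by sponsor and/or targeted region."""
    results = []
    for ad in archive:
        if sponsor and sponsor.lower() not in ad.sponsor.lower():
            continue
        if region and ad.targeting.get("region") != region:
            continue
        results.append(ad)
    return results

# Example: find every ad a given (fictional) sponsor targeted at Ohio.
archive = [
    PoliticalAdRecord("ad-001", "Example PAC", 5_000, 10_000,
                      date(2021, 9, 1), date(2021, 9, 15),
                      100_000, 150_000,
                      {"age": "18-34", "region": "US-OH"},
                      "Vote yes on Issue 2"),
]
print(search_ads(archive, sponsor="example pac", region="US-OH"))
```

The point of the sketch is the shape of the data: disclosed spend and impression ranges, targeting criteria, and the creative itself, all queryable in bulk rather than locked behind a screenshot-only web interface.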

Transparency should also be designed to benefit everyday users, not just researchers. People should be able to easily see why a given piece of content is recommended to them, or who paid for the political ad in their feed.
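A user-facing version of the same idea could be as simple as attaching a plain-language explanation to every recommendation. The sketch below is hypothetical: the signal names and wording are invented for illustration and are not drawn from any platform’s real system.

```python
# Hypothetical "why am I seeing this?" note built from the signals
# that drove one recommendation. All signal names are invented.
def explain_recommendation(signals: dict) -> str:
    """Turn recommendation signals into a plain-language explanation."""
    reasons = []
    if signals.get("watched_channel"):
        reasons.append(f"you watch videos from {signals['watched_channel']}")
    if signals.get("similar_to"):
        reasons.append(f"it is similar to \"{signals['similar_to']}\"")
    if signals.get("paid_by"):
        reasons.append(f"it is a paid placement funded by {signals['paid_by']}")
    if not reasons:
        return "No explanation available."
    return "Recommended because " + " and ".join(reasons) + "."

print(explain_recommendation({
    "watched_channel": "Mozilla",
    "similar_to": "What is greenwashing?",
}))
# -> Recommended because you watch videos from Mozilla and it is similar to "What is greenwashing?".
```

Nothing about such an explanation requires exposing trade secrets; it only requires surfacing, in plain terms, the same signals the system already acts on.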

