YouTube outlines its ever-evolving efforts to combat the spread of harmful misinformation

YouTube has provided a new overview of the evolution of its efforts to combat the spread of misinformation via YouTube clips, which sheds light on the various challenges the platform is facing and the options it’s weighing for dealing with these issues.

This is a critical issue, with YouTube, along with Facebook, being regularly identified as a primary source of misleading and potentially harmful content, with viewers sometimes drawn deeper and deeper into misinformation rabbit holes via YouTube’s recommendations.

YouTube says it’s working to address this issue and is focusing on three key elements in this push.

The first element is detecting misinformation before it gains traction, which YouTube says can be particularly difficult with new conspiracy theories and misinformation pushes, as it can’t update its automated detection algorithms without a significant amount of content to train its systems on.

Automated detection processes are built on examples, and for older conspiracy theories this works very well, as YouTube has enough data to train its classifiers on what to detect and limit. But newly emerging narratives complicate things, presenting a different challenge.
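To make that ‘cold start’ problem concrete, here’s a minimal, purely illustrative sketch of an example-driven classifier (not YouTube’s actual system), using hypothetical labeled snippets to show why a brand-new narrative, with no prior examples, gives such a model very little to work with.

```python
# Purely illustrative: a toy example-driven text classifier in the spirit of
# what YouTube describes, not its actual detection system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples, standing in for videos already reviewed by moderators.
train_texts = [
    "the moon landing was staged in a studio",       # known conspiracy narrative
    "5g towers secretly cause illness",              # known conspiracy narrative
    "how to bake sourdough bread at home",           # benign
    "weekly highlights from the championship game",  # benign
]
train_labels = [1, 1, 0, 0]  # 1 = likely misinformation, 0 = benign

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

# A brand-new narrative shares little vocabulary with past examples, so the
# model has weak signal either way: the cold-start problem described above.
print(classifier.predict_proba(["new claim about a breaking news event"]))
```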

YouTube says it is considering various ways to update its processes on this front and limit the spread of evolving harmful content, particularly around developing news stories.

“For major news events, like a natural disaster, we display developing news panels to direct viewers to text articles. For niche topics that the media might not cover, we provide viewers with fact-check boxes. But fact-checking also takes time, and not all emerging topics will be covered. In these cases, we have explored other types of labels to add to a video or above search results, such as a disclaimer warning viewers that high-quality information may not be available.”

This, ideally, will broaden YouTube’s ability to detect and limit emerging narratives, although it will remain a challenge in many ways.
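As a purely illustrative aside (hypothetical names, not a real YouTube API), the decision logic in that quote could be sketched roughly like this: a developing-news panel for major events, a fact-check box where a published check exists, and a generic disclaimer otherwise.

```python
# Illustrative sketch only; the names and structure are assumptions, not YouTube's API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TopicContext:
    is_major_news_event: bool
    fact_check_url: Optional[str] = None  # set when a published fact check exists

def choose_panel(topic: TopicContext) -> dict:
    if topic.is_major_news_event:
        # Major developing story: point viewers to authoritative text coverage.
        return {"type": "developing_news_panel"}
    if topic.fact_check_url:
        # Niche topic with an available fact check: surface it directly.
        return {"type": "fact_check_box", "source": topic.fact_check_url}
    # Emerging topic with no coverage yet: warn that reliable information is limited.
    return {"type": "disclaimer", "message": "High-quality information may not be available yet."}

print(choose_panel(TopicContext(is_major_news_event=False)))
```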

The second element of interest is cross-platform sharing, and the amplification of YouTube content outside of YouTube itself.

YouTube says it can implement whatever changes it wants within its own app, but if people re-share videos on other platforms or embed YouTube content on other websites, it becomes harder for YouTube to restrict their distribution, which creates further challenges to mitigate.

“One possible way to address this is to disable the share button or break the link on videos that we already limit in recommendations. This effectively means you could not embed or link to a borderline video on another site. But we wonder whether preventing shares might go too far in restricting a viewer’s freedoms. Our systems reduce borderline content in recommendations, but sharing a link is an active choice a person can make, distinct from a more passive action like watching a recommended video.”

That’s a key point: while YouTube wants to restrict content that could promote harmful misinformation, if that content doesn’t technically violate the platform’s rules, to what extent can YouTube work to limit it without overstepping?

If YouTube can’t limit the delivery of such content through sharing, that remains a significant vector of harm, so it needs to do something about it, but the tradeoffs here are considerable.
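As a rough, hypothetical sketch of the first option YouTube floats (disabling sharing or breaking links for borderline videos), a server-side check might look something like the following; the field names here are invented for illustration.

```python
# Hypothetical sketch, not YouTube's embed or sharing pipeline.
def can_embed_or_share(video: dict) -> bool:
    """Return False if the video should be blocked from off-platform distribution."""
    if video.get("violates_policy"):  # removed content is never shareable
        return False
    if video.get("is_borderline"):    # already limited in recommendations
        return False                  # the stricter option described in the quote above
    return True

print(can_embed_or_share({"is_borderline": True}))   # False: no embed or share link
print(can_embed_or_share({"is_borderline": False}))  # True
```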

“Another approach could be to surface an interstitial that appears before a viewer can watch an embedded or linked borderline video, letting them know the content may contain misinformation. Interstitials are like a speed bump: the extra step makes the viewer pause before watching or sharing content. In fact, we already use interstitials for age-restricted content and violent or graphic videos, and consider them an important tool for giving viewers a choice in what they are about to watch.”
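And here’s an equally rough sketch of the interstitial ‘speed bump’ alternative, again with invented names rather than anything from YouTube’s actual player logic.

```python
# Hypothetical sketch of the "speed bump" flow, not YouTube's player code.
def request_playback(video: dict, viewer_acknowledged: bool = False) -> str:
    if video.get("is_borderline") and not viewer_acknowledged:
        # Pause and warn; the viewer can choose to proceed or back out.
        return "show_interstitial: this video may contain misinformation"
    return "play_video"

print(request_playback({"is_borderline": True}))                            # warning first
print(request_playback({"is_borderline": True}, viewer_acknowledged=True))  # then playback
```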

Each of these proposals would be seen by some as overreach, but each could also limit the spread of harmful content. At what point, then, does YouTube become a publisher, which could subject it to existing editorial policies and processes?

There are no easy answers in any of these categories, but it is interesting to consider the different elements at play.

Finally, YouTube says that it’s working to expand its misinformation efforts globally, which is complicated by differing attitudes toward, and approaches to, news sources across regions.

“Cultures have different attitudes about what makes a source trustworthy. In some countries, public broadcasters like the BBC in the UK are widely regarded as providing authoritative information. Meanwhile, in others, state broadcasters can veer close to propaganda. Countries also have a range of content within their news and information ecosystems, from outlets that demand strict fact-checking standards to those with little oversight or verification. And political environments, historical contexts, and current events can lead to hyperlocal misinformation narratives that appear nowhere else in the world. For example, during the Zika outbreak in Brazil, some blamed the disease on international conspiracies. Or, recently in Japan, false rumors spread online that an earthquake was caused by human intervention.”

The only way to combat this is to hire more staff in each region and build more localized moderation centers and processes that account for regional nuances. Even then, there are questions about how restrictions should apply across borders – should a warning displayed on content in one region also appear in others?
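For illustration only, a region-aware warning configuration might be modeled along these lines, with the cross-border question reduced to a simple flag; the structure and names are assumptions rather than anything YouTube has described.

```python
# Hypothetical sketch of region-specific warning labels; not YouTube's configuration.
from typing import Optional

REGIONAL_WARNINGS = {
    "BR": {"narrative": "zika_conspiracy", "label": "Disputed health claim"},
    "JP": {"narrative": "earthquake_rumor", "label": "False rumor about earthquake causes"},
}

def warning_for(narrative: str, viewer_region: str, apply_globally: bool = False) -> Optional[str]:
    """Return the warning label to show, if any, for this narrative and viewer region."""
    for region, rule in REGIONAL_WARNINGS.items():
        if rule["narrative"] == narrative and (apply_globally or region == viewer_region):
            return rule["label"]
    return None

print(warning_for("zika_conspiracy", "BR"))                       # shown to local viewers
print(warning_for("zika_conspiracy", "US"))                       # None, unless applied globally
print(warning_for("zika_conspiracy", "US", apply_globally=True))  # shown everywhere
```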

Again, there are no definitive answers, and it’s interesting to consider the various challenges YouTube faces here as it works to evolve its processes.

You can read YouTube’s full overview of its evolving misinformation mitigation efforts here.
