IPG calls out Facebook and YouTube for brand-damaging and ‘inconsistent’ disinformation policies


A new report from IPG Mediabrands and Magna looked at the extent of misinformation on social media and where brands are most at risk. Here's what you need to know.

After a series of high-profile brand safety breaches on social platforms, brands are reassessing where to invest their ad spend. While this has yet to move the needle on the current trend of increasing digital ad spend, these incidents are redefining and formalizing the relationship between brands and the platforms they choose to spend on.

At the same time, calls for monitoring and regulation are gathering momentum, and the fallout from the Facebook Files revelations has left advertisers worried that they are not fully informed about the practices and effectiveness of advertising on certain channels.

This movement gained momentum with the publication of a report by IPG Mediabrands on the extent of misinformation and disinformation on social platforms. The report distinguishes between the practices in place across social media to eliminate "misleading content" and, most importantly, advises brands to reconsider their spending to ensure they are not caught up in brand safety concerns, thereby helping to force changes across the industry.

Joshua Lowcock, global head of brand safety at Mediabrands network agency UM Worldwide, described the problem as one of "global harm to society and brands."

Labyrinth of moderation

The report examines how each of the major social platforms, from Facebook to Twitch, responds to potential misinformation from its users. It notes that the sheer complexity of understanding each platform's policies is a barrier to understanding where to spend.

For example, the report notes that Reddit delegates responsibility for ensuring there is no harmful content to volunteer moderators, even within "safe" subreddits that may carry advertising. In contrast, Facebook and YouTube each defer to third parties, including Wikipedia, as a "proactive" measure when the potential for misinformation arises.

The report notes that some of the bigger players have what it calls “conditional” policies regarding certain types of disinformation, which further muddies the waters (see table below).

Facebook, Instagram, YouTube and Twitter do not have strict policies on misinformation and disinformation, "often leaving a lot of gray and, subsequently, wiggle room for supporters of misinformation and disinformation to circumvent the policies," according to the report.

Even on the most regulated and specific platforms, such as LinkedIn, users typically have a lot more leeway to post misinformation than brands and advertisers, even though both types of content coexist in the feed.

Adding a further layer of confusion, some platforms, including Twitch and Snapchat, take users' off-platform activity into account when moderating their behavior. The differing levels of rigor, the use of third parties, and the varying policies for different types of disinformation mean there is often confusion about what counts as "brand safe" on each platform.

The damage is done right now

As the report makes clear, the lack of effective disinformation policies is already causing problems for brands. It specifically cites the dangers of vaccine misinformation as a flashpoint over the past 18 months, pointing to research from the Global Disinformation Index which found 189 ad-supported domains serving Covid-19 disinformation.

However, it is essential for brands and advertisers to note that ad networks – and not just social platforms – are responsible for allowing these sites to profit from misinformation. IPG Mediabrands also suggests that it is up to brands and advertisers to be proactive and ensure that their ad spend does not fund these sites, rather than delegating this responsibility elsewhere.

The report notes that the damage is often done to the brand rather than to the platforms or networks it advertises on, with Elijah Harris, executive vice president of global digital partnerships and media accountability at Magna, saying: "Responsible brands want to ensure that their messages are seen and shared with the right content on the platforms. Consumers are also watching this space to see how platforms are adapting their moderation and enforcement techniques to curb content seeking to spread fake news in a coordinated fashion."

Platform police

Rather than a call to stop ad spending on problematic social platforms, the report points out that the problem is industry-wide, replicated on some ad networks as well, and that brands have a key role to play in ensuring they don't inadvertently fund disinformation. One way to do this is to rely on platforms to improve.

Harrison Boys, director of standards and investment products EMEA at Magna and author of the study, said: "Marketers are right to be concerned when they find their advertising close to misleading content because, left uncontrolled, it could harm their reputation and the communities they serve. The industry, which has joined forces against hate speech online and supported online privacy, must now take a stand against misinformation and disinformation."

It suggests that brands can do this by using organizations like NewsGuard, which highlights "quality" journalism outlets that can be supported with ad spending. It's important to note, however, that even these third-party quality reviewers don't always align on what counts as a reliable, disinformation-free environment. News sites also generally do not offer the reach and scale available to brands on some social platforms, so an assessment of the relative pros and cons is necessary.

One thing the report makes clear is that misinformation and deliberate disinformation are rampant on social platforms, and brands that blindly advertise through them risk significant brand damage.
