Two years ago, in the late New Zealand summer, a lone gunman live-streamed the first of two attacks on mosques, killing 51 people. A day earlier, he had reportedly uploaded a 74-page racist manifesto. The Wall Street Journal ran a story underscoring the danger of digital content, “New Zealand Massacre Video Clings to the Internet’s Dark Corners.”
It was a “watershed moment” in the sprawling world of ad-supported digital media, says Rob Rakowitz, who leads the Global Alliance for Responsible Media (GARM), an industry alliance aimed at improving consumer and brand safety, speaking on an Internationalist Trendsetters podcast.
No brand wants to appear next to such despicable content.
In the days of the printing press, marketers and their advertising agencies bought positions inside magazines that covered specific topics. Magazine editors had complete control over content. Then marketers began buying audiences in digital media via programmatic display advertising networks, where there’s a lack of transparency, consistency and control — a kind of Wild West in publishing.
Did it matter where digital ads appeared as long as potential buyers saw them? Not really, according to the conventional wisdom at the time. This led to frenzied spending and ads landing next to hate-filled speech, adult material, conspiracy theories, spam and other nefarious content.
Online rhetoric has grown more heated, and everything from fringe politics to animal abuse to terrorist acts has found its way into digital media. Concerns about online brand safety have raised the eyebrows of CEOs and CFOs. “It’s fundamentally changed in the last year,” Rakowitz says.
Simply put, brands can’t afford to be associated with dangerous online content, whether by accident or negligence. As life and work blend into a single digital experience, consumers and employees alike won’t put up with brands that don’t align with their mores.
CMOs are now squarely in the spotlight on brand safety. Companies are turning to them to drive this conversation and to work diligently with corporate affairs, procurement, agencies and platform partners to ensure that the supply chain is aligned with corporate beliefs, Rakowitz says.
Marketers stand ready to pull ads, which, in turn, has forced major digital media platforms to take action and clean up content. A new report from GARM found significant progress being made by YouTube in the number of account removals, Facebook in reducing the prevalence of harmful content, and Twitter in removing individual pieces of content. Eight out of 10 of the 3.3 billion pieces of content removed across major platforms fall into three categories: spam, adult and explicit content, and hate speech and acts of aggression.
Marketers wanting to get better at brand safety should take a page out of GARM’s playbook and focus on four critical questions:
- How safe is the platform for consumers?
- How safe is the platform for advertisers?
- How effective is the platform at enforcing its safety policy?
- How responsive is the platform at correcting mistakes?
“Safety for consumers and thriving societies need to be bolstered by a bright and vibrant media marketplace,” Rakowitz says.
This article first appeared on cmocouncil.org.