In 2020, many of us were inundated with well-intentioned social good marketing campaigns. But audiences of publishers talking about race, gender, sexuality or even coronavirus were less likely to see these ads. It emerged that over-zealous brand safety protections disproportionately restricted ads from web pages deemed risky, which also deprived marketers of historically underserved audiences. The blocklist scandal is now well known, but The Drum asks experts whether that was just the tip of the iceberg.
Big brands have big marketing budgets to plaster ads across an almost incomprehensible number of web pages. On the open web, they typically side-step porn and piracy domains. But in split-second bids, it is audiences, not environments, that are bought. So brand safety vendors, armed with complicated machine learning tech, try to work out which pages are ‘safe’ to place an ad on.
But safety can be a nebulous concept on the web. While the Global Alliance for Responsible Media (GARM) has defined the ‘floor’ of brand safety – the essential-to-avoid subjects such as pornography, weaponry, crime, death, hate speech, illegal drug use and terrorism – there’s a huge chasm between expectations and reality. And if humans can scarcely agree on brand safety, how can a machine police it?
Brand safety tools
Atkin says: “A bad-faith interpretation of brand safety has led to minority media being blocked while white nationalists get a hall pass. The technology is bad, and the logic behind the technology is bad.”
In one newsletter, Atkin and her Check My Ads co-founder Nandini Jammi noted that Mastercard was blocking itself from content about racism. Any brand that stands against racism should be funding quality media discussing it, they argued. But this cognitive dissonance was fairly typical. Brands tread carefully in news, and brand safety vendors claim that advertising against bad news can damage brands.
One example is Lisa Utzschneider, chief executive of brand safety vendor Integral Ad Science (IAS). Early last year, she urged against blocking the entire news category, especially coverage of the coronavirus pandemic. Her call was necessary: blunt-force blocking reportedly cost the UK news media £50m in ad revenue in just a few months. Instead, she urged brands to use their tech to scan the “sentiment of the content, and segment out content that’s more positive or hero-related”.
But 2020 wasn’t exactly a bumper year for positive news; such a strategy would still demonetise important reporting. Vendors were quick to remind marketers that they were in control of the tools, and may not have been using them as deftly as they could.
Atkin believes there is a false narrative in place. “No marketer who we have spoken to would agree that news is dangerous. It’s ridiculous. No brand safety crisis has occurred because of that.”
With the human costs of misinformation in sharp relief following the final weeks of the Trump administration, she believes the focus should be on cutting out hate speech and disinformation vendors. But even agreeing on a list of acceptable sources is difficult. The Daily Mail may be one of the UK’s most-read news sites, but it has many critics and is no longer listed by Wikipedia as a credible news source. Meanwhile, few media owners came out in a good light after footage of the Christchurch massacre was published by tabloid titles in 2019; some even ran ads (albeit briefly) against the content.
When the mainstream media slips up, the mistake is visible, and they are criticised if not always held to account. But what about further down the digital pecking order where there’s zero editorial rigour?
Atkin says: “We did an ad audit for a Fortune 500 company, and our feedback from one of the contacts that did the presentation to the VP of marketing was ‘our ads are showing on the asshole of the internet’”.
As for the sentiment analysis solution posited by vendors, Atkin is unconvinced: “It is completely the wrong direction for this and they’re overcomplicating it.”
How the tools work
Stevan Randjelovic is director of brand safety and digital risk, EMEA, at GroupM, one of the world’s largest media-buying agencies. The group treats brand safety as “any area of digital risk, including wherever you can lose either money or reputation, or you can be legally hurt”.
He says discussion has been muddied. Brand safety offers baseline protections but brand suitability is about preferred placements. It would be unwise for an aviation or travel company to advertise next to a plane crash news story. Meanwhile, family-facing FMCGs may be more uncomfortable with profanity than other sectors.
“Different brands have different sensitivities,” he says. Some have been called out for awkward adjacencies and the funding of certain topics. Jammi, formerly of Sleeping Giants, has had a big part in calling out brands funding disinformation and hate, for example.
Randjelovic says: “Brands want to keep out of headlines but the brand safety discussion has grown into one of social responsibility. They want to fund ethical and appropriate content.”
So how does their aspiration fall so far from the reality?
For Randjelovic, marketers have to remain vigilant. “No tool is perfect, far from it. There are issues telling the difference between hate speech and somebody just being mean, sometimes even humans can’t tell the difference. But the tech is helpful.”
There needs to be transparency about where the ads run. Delivery reports need to be checked. Instances where the tech made the wrong decision need to be addressed. Keywords and buys need to be optimised. Those using the tech have to push vendors for better results.
“Brand safety efforts actually start with vetting your supply,” he says. On this, he and Atkin agree. Getting safe sites, with journalistic standards, is the starting point. Using inclusion lists to find the audiences the blocklists have inadvertently shunned helps too. Deciding that is a human’s job.
Then it is the tech’s job to check every page. “You can’t sit in the office and read every single page.” A campaign could deal in billions of impressions, after all.
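That division of labour (humans vet the supply, the tech checks every page at bid time) can be sketched as a simple inclusion-list gate. This is a minimal illustration only; the domains below are placeholder assumptions, not anyone’s actual vetted list, and real pre-bid systems are far more elaborate.

```python
# Minimal sketch of inclusion-list vetting: bid only on domains a human
# has already vetted. The domains below are placeholders, not a real list.
from urllib.parse import urlparse

INCLUSION_LIST = {"theguardian.com", "nytimes.com"}

def eligible_for_bid(page_url: str) -> bool:
    """Return True only for inventory on a human-vetted domain."""
    domain = urlparse(page_url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    return domain in INCLUSION_LIST

print(eligible_for_bid("https://www.theguardian.com/world/some-story"))  # True
print(eligible_for_bid("https://made-for-advertising.example/page"))     # False
```

The point of the design is the default: unknown domains fall outside the list and are never bid on, rather than being bid on until a blocklist catches them.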
Marketers appear to be awakening again to the benefits of buying quality media, although that may have more to do with privacy regulations than any love of premium environments.
Ben Pheloung is general manager for Mantis, a brand-safety tool owned by Reach, one of the UK’s biggest newspaper groups. Running on IBM Watson’s machine learning platform, it looks to study what’s on the page in a smarter manner.
Pheloung says: “Brands blocked words like ‘coronavirus’, ‘shoot’ and ‘attack’ without considering the context of the content.” He says some publishers believe such blunt keyword blocking cut off 75% of what they deemed ‘safe’ content.
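The bluntness Pheloung describes is easy to see in miniature. The sketch below is a toy illustration of context-blind keyword blocking, not any vendor’s actual code; the blocklist and page texts are assumptions chosen for demonstration.

```python
# Toy sketch of context-blind keyword blocking (not any vendor's real code).
# A page is flagged if any blocklisted word appears, regardless of framing.
BLOCKLIST = {"coronavirus", "shoot", "attack"}

def is_blocked(page_text: str) -> bool:
    """Flag a page if any blocklisted keyword appears anywhere in it."""
    words = {w.strip(".,!?'\"").lower() for w in page_text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A public-health explainer and a film-industry story are both demonetised:
print(is_blocked("How to protect your family during the coronavirus pandemic"))  # True
print(is_blocked("Director announces new film shoot in London"))                 # True
print(is_blocked("Ten gentle recipes for a spring weekend"))                     # False
```

Because the match is on the word alone, a pandemic safety guide and a report on a film shoot are treated exactly like the worst-case content the keyword was meant to catch.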
Mantis is built on the assumption that marketers want to appear next to news. Atkin, after all, points out that brands have ads blocked on the home pages of The New York Times and The Guardian, which should “infuriate” them.
Pheloung says: “Historically, brands have certainly been hesitant to run ads on news content – some even blocking ‘news’ as a category altogether. Brands should have more concern about advertising in social media environments, where there is less control and policing of the content.”
Going deeper than keywords to understand the sentiment and emotion around the web pages is the “difference between content being blocked or monetised”. It will free up more inventory than has been historically available and will give brands more control.
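The contrast with plain keyword matching can be sketched in a few lines. This is a minimal illustration of the idea of weighing context around a keyword, not Mantis or Watson itself; the cue-word lists and the threshold are assumptions invented for the example, where a real system would use a trained model.

```python
# Hedged sketch of context-aware suitability scoring: instead of blocking
# on a keyword alone, weigh simple sentiment cues around it. The cue lists
# below are illustrative assumptions, not any vendor's taxonomy.
NEGATIVE_CUES = {"death", "victims", "panic", "outbreak"}
POSITIVE_CUES = {"recovery", "protect", "volunteers", "vaccine"}

def suitability_score(page_text: str) -> float:
    """Score from 0.0 (avoid) to 1.0 (suitable) based on surrounding cues."""
    words = [w.strip(".,!?'\"").lower() for w in page_text.split()]
    neg = sum(w in NEGATIVE_CUES for w in words)
    pos = sum(w in POSITIVE_CUES for w in words)
    total = neg + pos
    return 0.5 if total == 0 else pos / total

def is_suitable(page_text: str, threshold: float = 0.5) -> bool:
    return suitability_score(page_text) >= threshold

# The word "coronavirus" no longer decides the outcome; the framing does:
print(is_suitable("Local volunteers protect elderly in coronavirus vaccine drive"))  # True
print(is_suitable("Coronavirus outbreak causes panic as death toll rises"))          # False
```

Even this toy version monetises the positive coronavirus story that a keyword blocklist would demonetise, which is the extra inventory Pheloung is pointing at.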
Marketers are under the impression that ads next to positive news stories carry less risk and deliver better outcomes, but there’s scant evidence to back that up. In fact, research by Reach suggested that an ad placed against bad news in a trusted news source would do little, if any, reputational damage.
Pheloung concludes: “There is still a level of reluctance from the buy-side to make any major changes to improve brand safety. As a vendor, it is up to us to offer alternatives to the blunt brand safety solutions.”
The next concerns?
As GroupM’s Randjelovic points out, keyword blocking covers only post-bid activity. The pre-bid processes of brand safety vendors, as well as the buying process in general, should also be reviewed for flaws.
Chris Kenna, founder of diversity network Brand Advance, a group driving much of this debate in the UK, is enthused that positive steps have been made. But as ever, there are more concerns ahead.
With marketers now largely aware of the damage blocklists can do, they have adapted their tactics. But his research has found that some LGBTQ+ or racial restrictions have been baked into the source code of vendor tech. “Even when brands remove their keywords to reach community media, they are still being blocked by the vendors, who are sometimes unwilling to change their codes.” Algorithmic bias needs to be found and expunged.
He adds that the mainstream media sometimes tries to pass off its inventory as ‘diversity’ media to take advantage of the surge in interest from readers and brands. “It has to be media owned and written by and for a particular community. Mainstream media articles should not be sold as diversity media,” he says.
Check My Ads’ Atkin has a final concern. There is a power shift afoot, with Google flipping the script on third-party cookies. “The demise of the third-party cookie would be fine if it didn’t immensely help Google, the web’s most prominent funder of hate speech and disinformation,” she says.
As varied as the issues are, Atkin concludes the answer lies with marketers, not adtech. “They need to understand the benefits of spending with quality publishers.”