Brand safety has become one of advertisers' top priorities in recent years. Brands clearly don't want the ads they pay good money for to appear next to offensive or dangerous content. The problem is that this is not an exact science, even with the help of algorithms.
The most common tool is probably a keyword blacklist, which prevents ads from showing next to content containing certain terms. This is easy when it comes to insults or swear words (no need to give examples), but what about, say, the word “Nazi”? It might appear in a context a brand wants to avoid, but it could just as easily appear in content about the history of World War II.
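To make the problem concrete, here is a minimal sketch of how naive keyword blocklisting works. The term list, function name, and sample text are illustrative assumptions, not any vendor's actual implementation:

```python
# Toy illustration of naive keyword blocklisting (hypothetical term list).
BLOCKLIST = {"nazi", "death", "injury", "shooting"}

def is_blocked(article_text: str) -> bool:
    """Flag the article if ANY blocklisted term appears, ignoring context."""
    words = {w.strip('.,"\u201c\u201d').lower() for w in article_text.split()}
    return not BLOCKLIST.isdisjoint(words)

# A factual World War II history piece is flagged exactly like hate content:
history = "The exhibition traces how the Nazi regime rose to power in 1933."
print(is_blocked(history))  # True: blocked despite being perfectly safe
```

The match is purely lexical, which is precisely why the false positives described below occur: the word alone decides, never its surroundings.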
It is precisely this kind of nuance that, according to a recent study, is missing from the major keyword blacklists, with the result that publishers fail to generate revenue from perfectly safe content.
57% of neutral or positive stories are being incorrectly flagged as unsafe for brands
Brand safety company CHEQ investigated the keyword blocking applied to 225 articles on 15 major websites, including CNN, the New York Times, The Guardian and the Wall Street Journal. It did so on a single day, July 19, 2019, and found that 57% of neutral or positive stories were incorrectly flagged as brand-unsafe.
The keywords that most commonly caused an article to be blocked were death, injury, lesbianism, dying, gun, sex, shooting, and alcohol. What was missing was context. CHEQ found that one in five articles that included the word “death” was perfectly safe. And, for example, a 1,700-word article from Bleacher Report was blocked because it mentioned that an Arkansas Razorbacks player had an ankle “injury.”
LGBTQ websites like PinkNews and The Advocate are also flagged as brand-unsafe. According to PinkNews editor Benjamin Cohen: “In the open market, basically a ton of our content is blocked for no legitimate reason. A lot of ad networks block content for the word ‘lesbian’ because they vaguely think lesbian equals porn. I don’t think the problem is the brand, it’s whoever is managing their block list, because often brands are really surprised when you send them the block list.”
“The collateral damage is that LGBTQ content creators have a hard time monetizing”
CHEQ CEO Guy Tytunovich says that as a result, LGBTQ content is being denied the ability to generate ad revenue. “This is not done maliciously. It happens because many verification companies do not have the technological ability to distinguish between positive LGBTQ content and potentially negative content like pornography or hate speech,” says Tytunovich. “They often ‘play it safe’ by blacklisting LGBTQ-related terms, and the collateral damage is that LGBTQ content creators have a hard time monetizing.”
As culture becomes more diverse, brands are becoming more cautious. According to the Wall Street Journal, in the second quarter of this year the number of advertisers working with ad measurement company DoubleVerify that blocked ads from running on news or political content was up 33% from 2018, and more than double the 2017 total. Integral Ad Science said the average number of keywords its advertisers blocked in the first quarter was 261, with one advertiser blocking as many as 1,553 words.
Not only does this type of keyword-based brand safety prevent marketers from reaching some of the very audiences they're paying big money to reach, but in the case of LGBTQ sites like PinkNews and The Advocate, it actually silences the voices of those communities, even if unintentionally.
Tytunovich says advertisers should demand that simplistic keyword blacklists be discontinued. “This will put more pressure on the industry to adopt smarter technology that can understand content contextually and make informed decisions, rather than just bluntly blocking