Facebook’s Leaked Content Guidelines: What Marketers Should Know
A leaked document published by The Guardian outlines the guidelines Facebook is using to monitor big topic issues like violence and racism.
Saying “#stab and become the fear of the Zionist,” for example, would be considered a credible threat—and Facebook moderators would be able to remove that particular content. But saying “kick a person with red hair” or “let’s beat up fat kids” is not considered a realistic threat of violence.
Similarly, videos featuring violent deaths will be marked as disturbing, but will not always be deleted because they might raise awareness about issues such as mental illness.
Clearly, there are gray areas in the way content is handled.
What the leak has done is shed light on one simple truth: Publishing mammoths like Facebook and Google (which has also experienced its share of controversy over content) can’t currently provide 100% brand safety.
At scale, user-generated content poses too great a challenge. And this doesn’t necessarily bode well for advertisers.
“Advertisers are demanding more than what these platforms can currently provide,” said Ari Applbaum, vice president of marketing at video advertising platform AnyClip. “Until artificial intelligence solutions are robust enough to provide 100% assurance, manual screening of content is replacing AI, and it’s not sustainable in the long run.”
But while some advertisers may not be happy about the context in which their ads appear, they do have some control over the process.
“Every brand has their specific set of criteria in terms of their own limits and thresholds,” said Marc Goldberg, CEO of Trust Metrics, a publisher verification firm. “I don’t think this leak will impact Facebook’s business, but it will introduce new conversations around specific concerns and whether the company is doing enough for brands.”
Putting better content guidelines in place has been a priority for Facebook.
The company has been dealing with mounting concerns that it has been too slow to respond to problematic content posted on its platform, such as live videos of murders and suicides.
To help curb the problem, CEO Mark Zuckerberg earlier this month said he would hire 3,000 people to review videos and other posts, with an aim toward speeding responses to problem posts.
In a Facebook post, Zuckerberg said: “We’re working to make these videos easier to report so we can take the right action sooner—whether that’s responding quickly when someone needs help or taking a post down. This is important. Just last week, we got a report that someone on Live was considering suicide. We immediately reached out to law enforcement, and they were able to prevent him from hurting himself. In other cases, we weren’t so fortunate.”
This article first appeared on www.emarketer.com