FACEBOOK WANTS TO FIX ITSELF. HERE’S A BETTER SOLUTION.


CHALK IT UP to a New Year’s resolution, or maybe just the ongoing fallout from Russian meddling in the 2016 election, but Facebook founder and CEO Mark Zuckerberg is looking to do things a little differently this year. At the beginning of January he posted that his goal for 2018 is to “focus on fixing… important issues” facing his company, referring to election interference as well as the issues of abusive content and addictive design.

Unfortunately, it will be very difficult for Facebook or other technology platforms to fix these problems themselves. Their business models push them to focus on user and engagement growth at the expense of user protection. I’ve seen this firsthand: I led the team in charge of policy and privacy issues on Facebook’s developer platform in 2011 and 2012. And in mid-2012, I drew up a map of the data vulnerabilities facing the company and its users, including a list of bad actors who could abuse Facebook’s data for nefarious ends, with foreign governments as one possible category.

I shared the document with senior executives, but the company didn’t prioritize building features to solve the problem. As someone working on user protection, I found it difficult to get any engineering resources assigned to build or even maintain critical features, while the growth and ads teams were showered with engineers. Those teams were working on the things the company cared about: getting more users and making more money.

I wasn’t the only one raising concerns. During the 2016 election, early Facebook investor Roger McNamee presented evidence of malicious activity on the company’s platform to both Mark Zuckerberg and Sheryl Sandberg. Again, the company did nothing. After the election it was also widely reported that fake news, much of it from Russia, had been a significant problem, and that Russian agents had been involved in various schemes to influence the outcome.

Despite these warnings, it took at least six months after the election for anyone to investigate deeply enough to uncover Russian propaganda efforts, and ten months for the company to admit that half of the US population had seen propaganda on its platform designed to interfere in our democracy. That response is totally unacceptable given the level of risk to society.

Faced with withering public and government criticism over the past several months, the tech platforms have adopted a strategy of distraction and strategic contrition. Their reward for this approach has been that no new laws have been passed to address the problem. Only one new piece of legislation, the Honest Ads Act, has been introduced, and it addresses only election-specific foreign advertising, a small part of the much larger set of problems around election interference. The Honest Ads Act still sits in committee, and the tech industry’s lobbying group has opposed it. This inaction is a big problem, because experts say that foreign interference didn’t stop in 2016. We can only assume these actors will be even more aggressive in the critical elections coming this November.

There are a few things that must happen immediately if any efforts to solve these problems are to succeed. First, the tech platforms must be dramatically more transparent about their systems’ flaws and vulnerabilities. When they discover their platforms are being misused or abused (by, say, allowing advertisers to discriminate based on race and religion), they need to alert the public and the government to the extent of the misuse and abuse: something bad happened, and here’s how we’re going to make sure it doesn’t happen again. No waiting around for investigative reporters to get creative.

Of course, transparency only works if everyone trusts the information being shared. Tech platforms must accept regular third-party audits of all metrics they report on malicious use of their platforms and on their efforts to combat it. And third parties must also be involved in ensuring policies are enforced correctly. A recent report by ProPublica showed that 22 of 49 content policy violations reported to Facebook over several months at the end of 2017 were not handled in compliance with the company’s own guidelines. Twitter has also faced persistent criticism that it doesn’t enforce its own policies consistently. To help solve this, data protection advocate Paul-Olivier Dehaye suggests creating a framework by which users can easily route policy violations to third parties of their choosing for analysis and reporting. That way, independent entities would audit both the efficacy of the platforms’ policies and the effectiveness of their policing.

Transparency alone, however, is not enough to prevent major societal harm. Tech platforms also need to accept liability for the negative externalities they create, something Susan Wu suggested in a WIRED op-ed late last year. This will push them to think creatively about the risks they are creating for society and to devise effective solutions before harm happens.

The Russian election meddling that took place on Facebook, Twitter, and Google in 2016 was such a negative externality. It harmed everyone in America, including people who don’t use those products, and it is impossible to imagine that this propaganda campaign would have succeeded in the same form without the technology made available by Facebook, Twitter, and Google. Russian agents used targeting and distribution capabilities that are unique to their products, and they also exploited a loophole in the law that exempted internet advertising from the restrictions that prevent foreign agents from buying election ads on television, radio, or print media. (The Honest Ads Act would close this loophole.)

Where significant negative externalities are created, companies should be on the hook for the costs, just as an oil company is responsible for covering the costs of cleaning up a spill. The cost of the damage caused by election meddling, though, is difficult to calculate, so fines alone may not be enough. One possible solution is a two-strike rule: on the first strike, the company fixes the problem and, where possible, pays a fine; on the second strike, government regulators change or remove the features that are being abused. Only with financial liability and the direct threat of feature-level regulation will companies prioritize decision-making that protects society from the worst kinds of harm.

Given what is at stake in the upcoming elections and beyond, we must not accept distraction and empty contrition in place of real change that will protect us. Only with real transparency, real accountability, and real regulation will we get real change. There is too much at stake to accept anything less.

This article first appeared on www.wired.com.
