Who Is Responsible for Biased and Intrusive Algorithms?

Algorithms have become part of our everyday lives. Whether one considers jobs, loans, health care, traffic or news feeds, algorithms make many decisions for us. While they often make our lives more efficient, these same algorithms can also violate our privacy and produce biased, discriminatory outcomes.

In their book The Ethical Algorithm: The Science of Socially Aware Algorithm Design, Michael Kearns and Aaron Roth, professors at Penn Engineering, suggest that the solution is to embed precise definitions of fairness, accuracy, transparency, and ethics at the algorithm's design stage. Algorithms, they say, don't have a moral character; it is we who need to learn how to specify what we want.

In a conversation with Knowledge@Wharton, Kearns and Roth – respectively the founding co-director and a faculty affiliate of the Warren Center for Network and Data Sciences – discuss developments in the field.

An edited transcript of the conversation appears below.

Knowledge@Wharton: What prompted you to write about ethical algorithms? Why is it a critical issue? Can software really have a moral character?

Michael Kearns: We are longstanding machine learning researchers, especially on the algorithmic side. We have watched as the field has grown and found more applications, and as collateral damage has followed, such as violations of privacy and discriminatory models. We watched society and the media become increasingly alarmed about these incidents and suggest regulatory and legal solutions. We think those are necessary, but we also knew that we and others were working on making the algorithms better. If an algorithm has known privacy or fairness issues, you could redesign the algorithm so that these problems don't happen in the first place. We wanted to try to explain to a lay audience what was going on in the field and what the costs of implementing fairer or more private solutions are.

Aaron Roth: It's not so much that algorithms themselves have a moral character. Rather, we have started replacing human decision makers with algorithmic procedures in parts of decision-making pipelines, and we're doing this, by the way, at a large scale. Human resources departments now use algorithms to guide both hiring and compensation decisions. Lending institutions make credit decisions on the basis of algorithms. In Pennsylvania, both parole and bail decisions are informed, in part, by algorithms. When human beings reside in decision-making pipelines, there are various social norms that we expect them to respect. We expect them to be fair. We expect them to provide some kind of due process. We want them to protect privacy.

The problem is that the algorithms we are putting into these pipelines are not old-fashioned, hand-coded algorithms. Instead, they are the output of machine learning processes. And nowhere in a machine learning training procedure is a human being sitting down and coding everything the algorithm should do in every circumstance. They just specify some objective function. Usually it's a narrow objective function, like maximizing accuracy or profit. And then they let some optimization procedure find the model that best does that. The algorithm you get at the end is really good as measured with respect to that narrow objective function. But what we've seen in the last decade is that there are often unanticipated and undesirable side effects, such as discriminatory behavior. So, it's not that the algorithms have a moral character, but that we need to learn how to specify what we want, not just in terms of our overall objectives, but also how to tell the algorithms to be fair, to be private.

Knowledge@Wharton: As you researched your book, which examples regarding algorithmic bias surprised you the most?

Kearns: I don’t think that we were particularly surprised by any media story that we’ve heard, or at least not surprised that technically these things could happen. What surprised me most was that even the people deploying these algorithms sometimes would be surprised that these things could happen. And often I was surprised by the scale of the damage done or the stakes involved.

A recent paper by some of our colleagues spoke about a company that builds a model for predictive health care, deciding who needs treatments of various kinds. Since assessing overall health is complicated, expensive and not fully objective, they used health care costs as a proxy. Money is easy to measure. It's easy to see how much a person's health care has cost over the last five years, for instance. It turned out that when they trained the model to equate health care costs with health, the model discriminated against minority and disadvantaged groups, who it falsely learned were less in need of care because they had cost less. But they had cost less only because they had poorer access to health care.

If you work in the field, especially the corner of the field that tries to address these ethical issues, you would know from the beginning that you should be very worried about training your algorithm on this proxy objective rather than the real thing that you care about, which is health. So I wasn’t surprised by what had technically happened in this case, but I was surprised that it was being used in many large hospitals.

Roth: If you go back to 2016, there was an exposé by ProPublica [a non-profit organization], which has gotten a lot of attention since then. It played a big part in kicking off a lot of the academic study. What they investigated were commercial recidivism prediction tools that were used to inform parole decisions, in this case in Broward County, Florida. These tools would try to predict the risk that a particular inmate, if paroled, would commit a crime again within the next 18 months. This was given as a piece of information to judges when they were deciding whether or not to parole inmates. What ProPublica found is that if you looked at the false positive rates of these tools – the rate at which people who ultimately would not go on to commit crimes were mistakenly labeled as high risk – they were much higher among African-Americans than among Caucasians.

“Machine learning algorithms are only trying to find patterns in the data you give them. There is no reason to think they’re going to remove biases that are already present in the data.”–Aaron Roth

ProPublica wrote a widely distributed exposé about this, saying that the COMPAS tool was unfair. The company that built the tool responded by saying, "We've thought carefully about this and our algorithm is fair." But what they meant by "fair" was that their tool was just as accurate on both populations – that it was equally well calibrated. This back-and-forth, which took place in the media, spurred a couple of academic studies, which ultimately showed that it was mathematically impossible to simultaneously satisfy the fairness notion that the company had in mind and the fairness notion that ProPublica had in mind. They were fundamentally at odds with one another. This was a big kick-starter to the academic work in this area.

Knowledge@Wharton: Where does the source of bias lie? Do algorithms tend to discriminate not because the software developers are biased but because the underlying data that is used to train the algorithms might have some problems?

Kearns: That’s right. You build complicated neural networks trained on huge amounts of data requiring many CPU cycles. You have some training data set and you have a very principled algorithm for searching for a model that minimizes the error on that training set. The algorithms used by practitioners in machine learning are quite transparent. They’re not complicated. They’re short and simple and they’re encoding a scientific principle. The problem is with the scientific principle in the first place, not with the people implementing it or the algorithm implementing it.

The problem with the principle is that when you pick an objective and you optimize for that objective, you shouldn’t expect to get anything else for free that you didn’t specify. If I optimize for error, I shouldn’t expect that two different racial populations in my training data will magically be treated the same by the model that I produce. When you’re training very rich models like neural networks, if a small fraction of a percent of error can be squeezed out at the expense of racial discrimination, that algorithm is going to go for it. To fix this, you need to modify the objective function to balance error with other considerations like fairness or privacy. If I want more fairness from my models, it’s going to cost me something, like accuracy for instance, because I’m essentially constraining what was before an unconstrained process.
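
A minimal sketch of what that modified objective can look like, in Python, using made-up labels, predictions and group memberships (everything here is illustrative, not the authors' own code): overall error plus a penalty on the gap in false rejection rates, with a knob that controls how much accuracy we are willing to give up for fairness.

```python
import numpy as np

def false_rejection_rate(y_true, y_pred, mask):
    """Fraction of truly positive (e.g., creditworthy) people in the group
    who were rejected by the model."""
    positives = mask & (y_true == 1)
    return np.mean(y_pred[positives] == 0) if positives.any() else 0.0

def penalized_loss(y_true, y_pred, group, lam):
    """Overall error plus a fairness penalty: lam controls how much we are
    willing to trade accuracy for a smaller disparity between groups."""
    error = np.mean(y_pred != y_true)
    gap = abs(false_rejection_rate(y_true, y_pred, group == "A")
              - false_rejection_rate(y_true, y_pred, group == "B"))
    return error + lam * gap

# Toy example: hypothetical labels, predictions, and group membership.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for lam in [0.0, 0.5, 2.0]:
    print(lam, penalized_loss(y_true, y_pred, group, lam))
```

With lam set to zero this is ordinary error minimization; turning the knob up tells the optimization procedure that a model which squeezes out a little more accuracy at the cost of a large disparity is no longer "better."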

Roth: The source of bad behavior is not some ill intent of a software engineer, and that makes it harder to regulate. You have to figure out the source of the problematic behavior and how to fix it. One source, as you said, is bias that's already latent in the data. Here's an example reported in the media. An HR group at Amazon was developing a resume-screening tool. The idea was not to automatically make hiring decisions, but to take a first pass over all the resumes submitted to Amazon, maybe pick out one in 10, and send them to human hiring managers. The tool was trained on past hiring decisions made by those same human hiring managers at the company. What they found (fortunately, they discovered this before deploying it) was that the tool was explicitly down-weighting resumes that included the word "women" or the names of several women's colleges. Nobody did this intentionally, but this was somehow predictive of decisions that human hiring managers had made at Amazon before. This is not surprising because machine learning algorithms are only trying to find patterns in the data you give them. There is no reason to think they're going to remove biases that are already present in the data.

But that's not the only problem. The impossibility result that I alluded to earlier – that, so long as two populations have different base rates and prediction isn't perfect, it's not possible to simultaneously make mistakes in the harmful direction for both populations at equal rates and be equally well calibrated on those populations – is a mathematical fact whether or not there is any bias latent in the data. So even if you somehow solve the data collection problem, you're not done. You have to think about how to solve these big optimization problems in a way that trades off accuracy against all the other things you care about.
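
A small simulation, with assumed numbers rather than the actual COMPAS data, makes that tension concrete: build a risk score that is perfectly calibrated within each group by construction, give the two groups different base rates, and the false positive rates come out very different.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(score_probs, score_values, n):
    """Draw risk scores from a group-specific distribution, then draw true
    outcomes with probability equal to the score, so the score is calibrated
    within the group by construction."""
    scores = rng.choice(score_values, size=n, p=score_probs)
    outcomes = rng.random(n) < scores
    return scores, outcomes

def false_positive_rate(scores, outcomes, threshold=0.5):
    """Among people who did NOT reoffend, how many were labeled high risk?"""
    negatives = ~outcomes
    return np.mean(scores[negatives] >= threshold)

score_values = np.array([0.2, 0.8])
# Group 1 skews toward high scores (higher base rate), group 2 toward low.
s1, o1 = simulate_group([0.3, 0.7], score_values, 100_000)
s2, o2 = simulate_group([0.7, 0.3], score_values, 100_000)

print("FPR group 1:", false_positive_rate(s1, o1))  # roughly 0.37
print("FPR group 2:", false_positive_rate(s2, o2))  # roughly 0.10
```

In both groups a score of 0.8 really does mean an 80% chance of the outcome, so the tool is "equally accurate" in the company's sense, yet the false positive rate in the higher-base-rate group is several times larger, the kind of gap ProPublica highlighted.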

Knowledge@Wharton: One of the stories you tell in your book is about privacy violations regarding medical records. Could you talk about that and what it teaches us about the importance of safeguarding what is private and sensitive information?

Kearns: The State of Massachusetts, while wanting to protect the privacy of individual medical records, also wanted to use people's medical histories for scientific research, to find better treatments, better drugs, and so on. The state announced that it had figured out a way of allegedly anonymizing people's medical records by redacting certain fields: removing your name and address, for instance, and keeping perhaps only the first three digits of your zip code. The state said this should reassure everybody whose medical records were in these databases that it was safe to use them for scientific research or even to publicize them.

Latanya Sweeney, who was then a graduate student, was skeptical about this. She took data sets that had been made publicly available by the state and, by combining them with other publicly available data sources, such as census-type data, she was able to reidentify specific individuals in the data set, in particular William Weld, then the governor of Massachusetts.

We use this example to argue that notions of privacy that are based on anonymity, which include virtually all of the privacy policies in commercial use today, are fundamentally flawed. The reason is that they pretend that the data set that’s in front of us is the only data set that exists in the entire world. But that is not true. Once I start combining and triangulating and linking many different data sources, I can reidentify particular individuals in the data set. Lots of things that you might think are innocuous and irrelevant and aren’t particularly trying to hide might be highly predictive of things that you wouldn’t want to make public.

Some years ago, researchers showed that just by looking at the likes of Facebook users – using only that data and no other data, no demographic data about you, nothing about you and your friends on Facebook or what you post, just the sequence of things that you liked – one could use machine learning to predict with statistical accuracy people's drug and alcohol use, their sexuality, whether they're the child of divorced parents and many other things.

Latanya Sweeney, who is now a professor at Harvard, helped kick off the study of data privacy. For a long time, solutions to data privacy problems were ad hoc. Latanya's study showing that you could reidentify William Weld's medical record demonstrated that anonymization is not enough.

Knowledge@Wharton: What could have been done to protect the privacy?

Roth: For a long time, people tried to answer that question by responding to the most recent attacks. And then more clever attackers would come along and show that what was done was still not enough. The big breakthrough happened in 2006 when a group of mathematical computer scientists proposed what is now known as 'differential privacy.' Differential privacy requires that no statistical test can distinguish, better than random guessing, whether the results of a study were arrived at by looking at the entire data set as it is, or by looking at a data set that had your data removed but was otherwise identical.

The reason why something like this corresponds to privacy is that if a study is conducted without having access to your data at all, then you can’t say it violated your privacy. So, if there’s no way for anyone to tell whether a study used your data or not in a rigorous mathematical sense, then we should also say that this corresponds to some guarantee of privacy. This definition also has a knob that can dial up or down how much privacy you want. The definition says there should be “almost” no way of distinguishing whether your data was used or not. That’s a quantitative measure. Once you have a definition of what you want out of a guarantee of privacy, you can start to design algorithms and reason about tradeoffs.
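
To make that knob tangible, here is a generic textbook sketch of the Laplace mechanism, one standard way of achieving differential privacy (this is illustrative Python, not the mechanism any particular company or agency uses): to release a count, you add noise whose scale is 1/epsilon, where epsilon is the privacy parameter; smaller epsilon means more noise and a stronger guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)

def private_count(data, predicate, epsilon):
    """Release a count with differential privacy via the Laplace mechanism.
    Adding or removing one person's record changes the true count by at most 1
    (the 'sensitivity'), so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for record in data if predicate(record))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical records: did this person's health care cost exceed $10k?
records = [12_000, 3_000, 45_000, 8_000, 22_000, 9_500, 30_000]
over_10k = lambda cost: cost > 10_000

for epsilon in [0.1, 1.0, 10.0]:   # the privacy "knob"
    print(epsilon, round(private_count(records, over_10k, epsilon), 2))
```

Changing any one person's record shifts the true count by at most one, so at small epsilon the noise swamps any individual's contribution, which is the "almost no way to tell whether your data was used" guarantee in approximate, quantitative form.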

Knowledge@Wharton: Could you talk a little bit more about how algorithms become unfair and who is harmed when that happens?

Kearns: It happens by omission, in a way. I learn some complicated model using training data from, let's say, individual citizens or consumers, and I don't mention anything about fairness. Suppose I don't want false rejection rates to differ wildly between white and black people. If I don't say that, I'm not going to get it. I should expect there will be some disparity, and it often goes in the direction against racial minorities, for instance. To fix this you need to identify the problem. You need to figure out the right definition of fairness. So, I first need to identify what group or groups I'm worried about being harmed. I then need to decide what constitutes harm to that group. And then I have to literally write that into my objective function. Just as with differential privacy, there's a knob here too. I can specify that the disparity be zero percent, one percent, 10 percent. So, you can literally sweep out a quantitative tradeoff between error and unfairness.
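
That sweep can be simulated directly. The sketch below uses synthetic lending data (hypothetical scores, labels and groups, not anything from the book): for each tolerated level of disparity, it searches over per-group approval thresholds and keeps the most accurate pair whose false rejection gap fits under the cap, tracing out the error-versus-unfairness tradeoff Kearns describes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic loan data: a score, a true "repays" label, and a group label.
n = 4000
group = rng.choice(["A", "B"], size=n, p=[0.7, 0.3])
repays = rng.random(n) < np.where(group == "A", 0.7, 0.5)
score = np.clip(repays * 0.3 + rng.normal(0.4, 0.2, n), 0, 1)

def evaluate(th_a, th_b):
    """Approve if the score clears the group's threshold; return error rate
    and the gap in false rejection rates (creditworthy people who are denied)."""
    approve = np.where(group == "A", score >= th_a, score >= th_b)
    error = np.mean(approve != repays)
    frr = lambda g: np.mean(~approve[(group == g) & repays])
    return error, abs(frr("A") - frr("B"))

thresholds = np.linspace(0, 1, 21)
results = [evaluate(a, b) for a in thresholds for b in thresholds]

for cap in [0.00, 0.01, 0.05, 0.20]:      # tolerated unfairness
    feasible = [err for err, gap in results if gap <= cap]
    print(f"disparity <= {cap:.2f}: best error = {min(feasible):.3f}")
```

Loosening the cap can only leave the best achievable error the same or lower, so the printout is one concrete error-versus-unfairness curve; choosing where to sit on it is a policy judgment, not a statistical one.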

“Lots of things that you might think are innocuous and irrelevant and aren’t particularly trying to hide might be highly predictive of things that you wouldn’t want to make public.”–Michael Kearns

One serious flaw of these definitions is that they’re only providing guarantees at the aggregate level, to the group and not to the individuals. For instance, under this definition the neural network has learned to equalize the false rejection rates between black and white people, but it’s not saying that those rates are zero. In machine learning you’re always going to have some error; you’re going to make mistakes. It’s statistics.

There is a smaller and more recent body of work trying to provide stronger definitions of fairness that give guarantees at the individual level. The problem though is you can’t go to the logical extreme of treating everybody as being in a group by themselves, because then you need to either be correct on everybody or make a mistake on everybody. The first is unachievable, and the second is undesirable. The other hard part that is outside of the boundaries of science is, who are you worried about being harmed? Why are you worried about them being harmed? Are you protecting them for some contemporaneous reason? Are you protecting them to redress a past wrong? All these topics quickly bleed into very thorny issues.

Knowledge@Wharton: One extreme example that comes to mind is the so-called "trolley problem," which has emerged in a completely new form because of autonomous vehicles. If an autonomous car loses control and the algorithm must choose between saving the lives of pedestrians and saving the driver, how should this dilemma be resolved?

Roth: There's been a lot of focus on the trolley problem, in large part because it's dramatic. But I think the focus has been misplaced. It would be extremely rare for an autonomous vehicle to find itself in a position where it needs some moral solution to the trolley problem. Even with autonomous vehicles, the same kinds of seemingly more mundane issues that you think about when you're training simple statistical models are really the more important ones.

Let's go back to thinking about simple statistical models and whom they harm. Michael mentioned that they often harm the minority. The reason for that is pretty simple. There are fewer data points for a minority population, and it's often the case that different populations have different statistical properties. If the model can't fit both of them simultaneously and you ask it to minimize its overall error, it's just going to fit the majority population, because the majority population contributes more to the overall error. Let's think about that in a self-driving car context. Driving in a city is very different from driving in a rural wooded area. It's quite likely that self-driving cars will get a lot more training data about well-populated areas, because that's where the people are. So error rates for the predictive models in self-driving cars, which might now translate into rates of accidents, are going to be better optimized in populous regions of the country.
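
A tiny numerical illustration of that dynamic, with invented numbers: fit one shared parameter by least squares to data that is 90% "urban" and 10% "rural," where the two groups have different ideal values, and the fitted model sits near the majority's ideal.

```python
import numpy as np

rng = np.random.default_rng(3)

# A one-parameter "model": predict a single constant for everyone.
# Urban drivers (90% of the data) behave as if the right answer is 1.0;
# rural drivers (10%) as if the right answer is 0.0.
urban = rng.normal(loc=1.0, scale=0.1, size=9000)
rural = rng.normal(loc=0.0, scale=0.1, size=1000)
data = np.concatenate([urban, rural])

# Least squares picks the overall mean, which sits near the majority.
c = data.mean()
print("fitted constant:", round(c, 2))                               # about 0.9
print("urban squared error:", round(np.mean((urban - c) ** 2), 3))   # small
print("rural squared error:", round(np.mean((rural - c) ** 2), 3))   # large
```

The single model is excellent on the majority and much worse on the minority, even though nothing in the pipeline singled the minority out; the objective simply never asked about them separately.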

That's the kind of fairness, and maybe moral, issue that you might eventually start to worry about with self-driving cars: we deploy this new technology, and it has a lower rate of accidents overall.

Similar issues come up with automated medical decisions: we deploy some new automated method for predictive medicine that has a lower patient mortality rate overall. But you can start to dig deeper and ask, how are the mistakes that these models do make distributed? You often find that they're not distributed uniformly. And even if these models are lowering the rate of errors or accidents overall, compared to the human status quo, they might be causing more harm than the status quo in certain populations, in certain segments. I think that kind of thing is going to be much more consequential than the trolley problem.

Kearns: Something that's equally a parlor game as the trolley problem, but perhaps worthy of more serious contemplation, is whether there are certain types of decisions that we don't want algorithms to be making for moral reasons, even if they can make them more accurately than human beings. One example in this category is automated warfare. Maybe we are already in, or soon approaching, a future in which targeted assassination of enemies or terrorists by drone technology in a fully autonomous mode is more accurate than what human beings can provide. This would mean getting the target more often and causing fewer collateral deaths and less damage. But it's one of those things where, when an algorithm makes the decision, it changes something about the moral character of the decision. And maybe sometimes we don't want algorithms to be doing that, even though they're better at it than us.

Knowledge@Wharton: In the book, you talk about a new acronym called FATE. Could you explain what that stands for and how this kind of thinking is being used in different industries as people try to weigh the tradeoffs between fairness and accuracy?

Roth: FATE stands for fairness, accountability, transparency, and ethics. As far as I know, it was coined by the Microsoft Research Lab in New York City. These are some of the main social norms that people worry are going to go out the window as we start replacing human decision makers with algorithms.

“The big breakthrough happened in 2006 when a group of mathematical computer scientists proposed what is now known as ‘differential privacy.’”–Aaron Roth

We already talked about fairness. But people also worry about accountability and transparency. One of the things that people like about human decision makers is that if the decision doesn't go your way – if you're denied a loan, for example – in principle you can ask them, "How come? Why did you make that decision?" You can ask them about process. Sometimes when people talk about fairness they don't mean just outcomes, but also process. If there was some malfeasance – if the decision didn't go your way because someone didn't do their job properly – then often there's a way to figure out who that was and ask for some kind of accountability. But the models that are the output of machine learning pipelines can often be inscrutable. You might still want to ask, "Why wasn't I given a loan?" And it's not clear anymore what the right answer to that is. I could tell you what objective function I optimized and what data set I used. I could tell you the vector of a billion numbers that specifies the model that denied you a loan, but it's no longer clear whose fault it was. These are big, important questions that we're only beginning to grapple with.

Knowledge@Wharton: What about the regulatory environment? How do you see this evolving in the U.S. in contrast to Europe or Asia? China is emerging as a major superpower in AI. Do they think similarly about these issues as Western countries do? And if so, what should we expect?

Kearns: I think regulators need to demand greater access to the actual underlying algorithms, models and data being used by large tech companies, and algorithmically audit those pipelines for unfairness, for privacy violations, etc. At present, when bad things happen on a big scale, the regulators clean up in the sense of handing down a big fine and saying, “Okay, don’t do this again.” Instead, regulatory agencies need to become more data and algorithmically oriented themselves so that they can monitor misbehavior on an ongoing basis. Take Wall Street, for example. Compared to the tech industry, it is much more highly regulated. The regulators have data from the exchanges. They have data that no other party on Wall Street has and they use that data to spot certain kinds of trading behavior like insider trading that is illegal. They are under the hood of the system and monitoring what’s going on in those systems from the inside on an ongoing basis. This is very far from the truth in the tech industry right now. The tech companies, the big tech companies, they’re all huge consumers, users, developers, researchers of machine learning. But the regulatory agencies have no authority to say, “We want to come in and we want to look at your data and we want to look at your models.” This needs to change.

Regulators in Europe have more power and leverage than they do in the U.S., but they still don't have this level of regulatory oversight. Even the GDPR (General Data Protection Regulation), which looks very strong on paper, doesn't have any teeth. It pushes around words like privacy and transparency without anywhere saying what they should mean. I know very little about the regulatory environment in China. They're clearly a very big industry player in machine learning and are using it for governmental surveillance in ways that are unique to China. I don't know where that will lead. It's certainly concerning to people like us.

Roth: I would echo the need for regulatory agencies to become technically sophisticated and hire people who are conversant with the science. For example, one of the most important privacy regulations in the U.S. is HIPAA (the Health Insurance Portability and Accountability Act), the law regulating how health records can be shared. HIPAA was written back in the days when people were still thinking in terms of anonymization. We now know that this approach doesn't work, but unfortunately it is baked into HIPAA. If you want to release data under the Safe Harbor provision of HIPAA, you're supposed to de-identify it and remove unique identifiers like names and so on. That's not a very good way to protect privacy, and it limits the ways in which you can use the data. For example, you can no longer see how anything correlates with any of the identifiers that you removed. This is hard to change because laws and regulations are sticky. You'd have to get Congress to pass a new law.

On the other hand, the Census Bureau is also required to protect privacy, but the regulation governing the census doesn't say what that means. In this case that was good, because the Census Bureau was able to bring in scientists who knew what they were doing. The statistical products released as part of the 2020 U.S. Census will be protected with differential privacy. That's possible because the census is regulated in the sense that the bureau is required to protect privacy, but in such a way that it was able to rely on expert opinion. It was able to hire a chief scientist who could correctly interpret the science and figure out how to do that, as opposed to being stuck with out-of-date regulations.

Knowledge@Wharton: In your book you mention that if algorithms are badly behaved, maybe the solution is to have better behaved algorithms. Could you share any examples where companies have used this approach of self-regulation to solve these issues?

Kearns: It’s very early days. Virtually all the large tech companies are quite aware of the science we’re discussing and many of them have large and good groups of internal researchers who study algorithmic fairness, who study differential privacy, and have made serious contributions to the scientific literature on those topics. But for the most part, those groups tend to be in research divisions of the company. It’s harder to see these ideas actually make it into products and services.

“Regulatory agencies need to become more data and algorithmically oriented themselves so that they can monitor misbehavior on an ongoing basis.”–Michael Kearns

There have been some exceptions. For instance, Apple was one of the first companies to introduce differential privacy into one of its products. Later versions of iOS, the operating system for iPhones, iPads and the like, use differential privacy to report statistics about app usage back to Apple in a way that allows Apple to make quite accurate inferences about aggregate, population-wide app usage without letting it figure out precisely what your app usage is. So we're seeing an increasing rate of adoption of differential privacy. What Aaron mentioned about the 2020 U.S. Census is kind of a moonshot for differential privacy. They're still working through a lot of the engineering details.

Roth: There haven't been any big announcements regarding fairness, but I know that Google, LinkedIn and Spotify have teams thinking about these issues internally. I'm sure that products are being audited, at least with respect to these simple statistical fairness measures. But I'd say it might be a little too early. It is only now, some 15 years after the introduction of differential privacy, that it's starting to be a real technology that's ready to be put into practice. With fairness, there is so much we don't understand. What are the consequences of imposing these different fairness measures, which are mutually inconsistent with one another? I would be skeptical of attempts to rush in and put them into practice today.

Knowledge@Wharton: Many CEOs and corporate boards worry that they could face lawsuits if they ignore the risk of algorithmic bias. What would be your recommendation to them to ensure that the algorithms in their companies are ethical?

Kearns: My suggestion would be simple. Have people internally who think about this, and not just from a regulatory, HR, or policy standpoint. Have scientists and engineers who know this area look at your use of machine learning and your algorithm-development pipeline, and be vigilant about making sure you're not engaging in bad privacy practices or discriminatory training. There is a gulf between people who care about these topics but don't have quantitative training, and people who care about these topics and have the quantitative training. In many companies, even when both groups are present, they're relatively well separated. It is time that the computer scientists, statisticians and others who are part of building these algorithms and models have a seat at C-level discussions about these things. Otherwise, you're going to get it wrong.

Roth: What you don't want is to have a complete product ready to ship and then show it to, for example, the legal team to look for discriminatory outcomes, because by then it's often too late. If you want to make sure that your learning algorithms aren't vulnerable to legal challenge because of privacy or fairness issues, you have to design them from the beginning with these concerns in mind. These are not just policy questions, but technical questions. It's important to get people who understand the science and the technology involved from an early stage.

Knowledge@Wharton: Are there any other points that you would like to talk about?

Kearns: Everything we've talked about so far covers roughly the first half of the book. I think it's fair, to a first approximation, to think about violations of privacy or discriminatory behavior by algorithms as situations in which individuals are victimized by algorithms. You might be denied a loan even though you are creditworthy, and you might not even know that an algorithm made that decision and is causing real harm to you.

In the second part of the book, we look at situations in which the users of an algorithm or an app might themselves be complicit in its anti-social behavior, and it's not so easy to lay the blame exclusively on the algorithm. The right quantitative tools for thinking about these problems are game theory and economics. It has to do with settings where an app, for instance, is mediating all of our competing preferences.

The cleanest example we use is commuting and navigation apps like Waze and Google Maps. On the one hand, what could be better than an app that has real-time traffic information about what everybody else is doing on the roads, and you plug in your source and your destination and it optimizes your driving time? If you're a game theorist and you look at that app, you know there's a name for what it's doing: it is computing your best response in a multi-player game in which the players are all of the drivers on the road. It is a competitive game, right, because if Aaron and I are driving, we are creating a negative externality for you in the form of traffic. Your selfish preference would be that nobody drives except you and the roads are always entirely clear. You might ask, "Well, how can this be a bad thing? How can it be a bad thing that we are using these apps which help us selfishly optimize our driving time in response to what all the other drivers are doing?" Anybody who knows game theory will know that just because we're at equilibrium doesn't mean it's a good thing. There are concrete examples where our aggregate or collective driving time can actually increase due to the use of apps like this.
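
The best-known concrete example of that last point is Braess's paradox, which a few lines of arithmetic can verify (this is the standard textbook network, not data from Waze or Google Maps): give every driver a free shortcut, which each of them selfishly wants to use, and everyone's equilibrium commute gets longer.

```python
# Braess's paradox: 4000 drivers go from S to T.
# Road S->A takes (x / 100) minutes when x drivers use it; A->T takes 45.
# Road S->B takes 45 minutes; B->T takes (x / 100).
drivers = 4000

# Without the shortcut, the equilibrium splits traffic evenly: any driver who
# switched routes would only make their own trip longer.
half = drivers / 2
time_without = half / 100 + 45
print("equilibrium commute without shortcut:", time_without, "minutes")  # 65.0

# Add a zero-minute shortcut A->B. Taking S->A->B->T is now a dominant
# strategy (at most 40 + 0 + 40 minutes vs. at least 45 on any other route),
# so at equilibrium everyone takes it.
time_with = drivers / 100 + 0 + drivers / 100
print("equilibrium commute with shortcut:   ", time_with, "minutes")     # 80.0
```

The commute rises from 65 to 80 minutes per driver even though each driver is best-responding to all the others; equilibrium and collective well-being are simply not the same thing.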

Or let's take the news feed on Facebook. You can think of that algorithm as optimizing for each of us selfishly. It's trying to maximize engagement. This means it wants to put into our news feeds the posts that we find most interesting: news articles that we're likely to click on because we agree with them, rather than stuff that we find uninteresting or disagree with. Like Google Maps and Waze, the news feed is selfishly optimizing for each of us. From an individual perspective, what could be better than hearing from the friends you want to hear from and reading articles you find interesting? But this has led to a debate about whether it has essentially put us in a bad equilibrium, an equilibrium where individually we're all myopically happy, but which has led, perhaps, to a less deliberative democratic society due to polarization.

We think that there are maybe not algorithmic solutions to this problem, but algorithmic improvements that could be made. These would come at some costs, perhaps, to engagement or profitability, but the benefit might be a more tolerant, deliberative society.

This article first appeared on www.knowledge.wharton.upenn.edu
