Approaching fairness in machine learning
blog.mrtz.org

The biggest issues of bias/fairness in ML have to do not with the algorithms or results, but with the underlying data.
A trivial example: what if you trained a classifier to predict whether a person would be re-arrested before they went to trial? Some communities are policed more heavily, so you would tend to reinforce the bias that already exists and provide more ammunition to those arguing for further bias in the system - a feedback loop, if you will.
Or what if some protected group needs a higher down payment because the group is not well enough understood to distinguish those who will repay your loans from those who won't? Maybe educational achievement is a really good predictor for one group, but less effective for another. Is it fair to use the protected class (or any information correlated with it) when that is essentially machine-enabled stereotyping?
Recently it has been noted that NLP systems trained on large corpora of text tend to exhibit society's biases, e.g. assuming that nurses are women and programmers are men. From a statistical perspective the correlation is there, but we tend to be more careful than a machine about how we use this information. We wouldn't want to constrain our search for people to hire to just those who fulfil our stereotypes, but a machine would. This paper has some details on such issues: http://arxiv.org/abs/1606.06121
I don't think there are any easy solutions here, but I think it's important to be aware that data is only a proxy for reality and fitting the data perfectly doesn't mean you have achieved fair outcomes.
These are not "fairness" issues; these are process feedback issues. The same problem pops up if you're using algorithmic selection of machine parts to test for failure, attempting to programmatically evaluate patches for code quality, writing fraud detection algorithms, etc.
Great points. These are exactly the kinds of issues researchers are grappling with in trying to nicely define, and attempt to achieve, fairness in ML.
One of the great things about programming a computer to do something is that it forces you to expose and make explicit every hidden assumption. "Computers are very stupid: they only do exactly what you tell them to do."
With biases about people based on immutable characteristics (sex and race), we need to be clear about why stereotypes are bad and what we hope to achieve by eliminating stereotype-based reasoning. There is a great deal of hypocrisy and pretense around this subject, but only by being explicit and unapologetic can we explain to a computer what it is we want to achieve.
Stereotypes are not bad because they are false. Many stereotypes, even negative and unpleasant ones about vulnerable minorities, are statistically true at this time. A stereotype is nothing but a certain kind of model, and indeed, models built on sex and race stereotypes may perform better than those that aren't.
Nonetheless, we have strong norms against using stereotypes in law, public life and employment, because the outcome of such reasoning would be intrinsically unjust, and because of a long history of political struggle against a society that explicitly discriminated on the basis of race, sex, homosexuality and so on. Conservatives will disagree with these premises, but we implicitly reject a conservative, discriminatory vision of society. Rejecting oppressive and unjust stereotypes is an unavoidably political act.
We recognize that:
[0] It is a category error to treat humans like other kinds of objects which can be measured, because human beings are intelligent and can alter their behavior. Telling humans that science has discovered certain facts about their behavior may well change their behavior, or even re-order society around these new 'facts'. There is no neutral ground; doing statistics on humans has ethical implications. An awful example of this was the eugenics movement that inspired the Nazis.
[1] Stereotypes can reify themselves. A society which treats women as less than men will end up as a society where women are less than men, and are systematically harmed - they will be less educated, and will get treated as less intelligent. This is a kind of positive feedback loop between the widespread endorsement of a stereotype and its being 'confirmed as true'.
[2] It's intrinsically unjust to judge individuals, especially in a negative way, based on the behavior of others. This is a matter of justice, and it overrides considerations of predictive accuracy.
[3] We live in a society that is still unjust, racist, sexist and so on. We want a society where fair and equal opportunities are given to everyone, and where everyone has a chance to escape from negative stereotypes. This overrides efficiency. In that sense, we can hope to change the nature of society by changing the nature of claims that are widely held as 'truth'.
These are sophisticated premises for rejecting stereotype-based reasoning, and they come from rejecting the society-wide outcomes of treating stereotypes as truth. We know what these effects are, because we know what a discriminatory society looks like. But this is the kind of reasoning we can use to build non-discriminatory, socially just models that do not harm people simply for being who they are.
Another recent paper on this topic: http://arxiv.org/pdf/1606.08813v3.pdf. It shows how naive lending algorithms can skew against minority groups simply because there is less data available about them, even if their expected repayment rate is the same.
It can be self-reinforcing. Imagine some new demographic group of customers appears, and without any data you make some loans to them. The actual repayment rate will be low, not because that group has a worse distribution than other groups, but simply because you couldn't identify the lowest-risk members. A simplistic ML model would conclude that the new group is more risky.
Of course, smart lenders understand that in order to develop a new customer demographic they need to experiment by lending, with the expectation that their first loans will have high losses, but that in the long run learning about how to identify the low-risk people from that demographic is worthwhile. And they correct for the fact that the first cohort was accepted blind when estimating overall risk for the group.
Of course, this theory of discrimination is only applicable when minorities are fundamentally different from majorities. I.e., if the same ruleset is accurate for both whites and blacks (i.e., "I don't care about race, if he puts 20% down he's good"), this argument doesn't work at all - you can train your model on everyone and it'll work just fine.
However, if blacks and whites need to be treated fundamentally differently in order to make accurate loan decisions, then this argument applies. I.e., perhaps whites need a 20% downpayment for a loan to be financially a good risk but blacks need 40% (or vice versa).
I wonder how many people calling algorithms racist will endorse this conclusion. It sounds kind of...racist.
(Note that I don't use "racist" as a synonym for "factually incorrect" or "we should not consider this idea", but merely "this sounds like the kind of thing a white nationalist might say, or that Trump would be criticized for saying".)
Yes, blacks are fundamentally different from whites in terms of the available data to train algorithms on:
http://www.nytimes.com/2015/10/31/nyregion/hudson-city-bank-...
> The government’s analysis of the bank’s lending data shows that Hudson’s competitors generated nearly three times as many home loan applications from predominantly black and Hispanic communities as Hudson did in a region that includes New York City, Westchester County and North Jersey, and more than 10 times as many home loan applications from black and Hispanic communities in the market that includes Camden, N.J.
That's, of course, just recent history. Redlining from the 1960s onward would be enough to adversely affect the housing history data of minority groups even today. Treating everyone equally in the eyes of the algorithm is certainly an easy route to take, but as the non-algorithm-expert MLK Jr. pointed out:
> Whenever the issue of compensatory treatment for the Negro is raised, some of our friends recoil in horror. The Negro should be granted equality, they agree; but he should ask nothing more. On the surface, this appears reasonable, but it is not realistic.
Did you read what I wrote? Available data on blacks specifically is completely irrelevant if blacks and whites aren't fundamentally different. The white model will generalize.
If repayment probability for blacks and whites alike is A x downpayment_fraction + B x credit_score, you can use training data from whites and the model will accurately predict black repayment probability. It only fails if you actually need A' and B' for blacks.
As an example, maybe for whites A = 1.0 and for blacks A' = 0.75. In that case the optimal decision is to demand higher lending standards for blacks - a black person with a 40% downpayment would be treated the same as a white person with a 30% downpayment. Is this your belief?
Even in models where race doesn't directly cause an outcome, a model's judgements may be biased against a race.
For example, suppose that (1) people can be green or blue, (2) green people tend to live in Idaho, (3) living in Idaho is associated with people not paying back loans.
A linear model with non-zero, positive coefficients along the paths p(green) -> p(Idaho) -> p(fail_to_repay) and p(credit_score) -> p(fail_to_repay) will create trouble, even though color does not directly affect repayment. If you use a multiple regression fail_to_repay ~ B0 + B1*Idaho + B2*credit_score, it will discriminate against green people by penalizing people from Idaho.
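A toy simulation makes this concrete. This is a hypothetical sketch - every number below is invented for illustration - showing that a model scoring risk purely by location still assigns green people a higher average predicted risk, even though color never enters the data-generating process:

```python
import random

random.seed(0)

# Invented population: color does NOT directly affect repayment, but
# green people are more likely to live in Idaho, and Idaho residents
# default more often.
people = []
for _ in range(100_000):
    green = random.random() < 0.5
    # greens live in Idaho 70% of the time, blues 10% (assumed rates)
    idaho = random.random() < (0.7 if green else 0.1)
    # default depends only on location, never on color
    defaulted = random.random() < (0.30 if idaho else 0.05)
    people.append((green, idaho, defaulted))

# A model that scores risk purely by location...
def predicted_risk(idaho):
    return 0.30 if idaho else 0.05

greens = [p for p in people if p[0]]
blues = [p for p in people if not p[0]]
avg_green = sum(predicted_risk(i) for _, i, _ in greens) / len(greens)
avg_blue = sum(predicted_risk(i) for _, i, _ in blues) / len(blues)

# ...still penalizes greens in aggregate, via the Idaho proxy:
print(f"avg predicted risk, green: {avg_green:.3f}")  # roughly 0.225
print(f"avg predicted risk, blue:  {avg_blue:.3f}")   # roughly 0.075
```

The location coefficient is "correct" in-sample, yet its cost falls disproportionately on one color - which is the disparate-impact point being argued here.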
AFAIK, one of the points of the paper linked in the parent comment is that blindly using indicators like IP address may indirectly lead to discrimination against a racial group in this way, e.g. p(racial_group) -> p(a_specific_IP_address).
Maybe more relevant to your example, though, is that assuming whites and blacks have the same model in the "ground-truth" scenario I presented could cause a model to be discriminative (when it shouldn't be, because the coefficient for the path from p(green) -> p(fail_to_repay) is 0).
This specific issue is hairy, and exists for traditional approaches also.
If I understand your model right, you are saying that Idahoans don't repay loans and your model accurately reflects this. This isn't a bias at all. The model is issuing fewer loans to green people not because they are green but because they live in Idaho and are unlikely to pay back said loans.
This is a case like what is described in the article - when a perfect predictor (another word for this is "reality" or "hindsight") will still exhibit disparate impact.
It is a bias if you calculate the cost to people taking out loans, based on color. Green people will pay a higher cost, even though in the ground-truth model their race is not directly related to loan repayment.
For example, if only blue people in Idaho fail to repay loans, green people will still absorb a greater cost in the multiple regression case above (in the sense that they are more likely to be penalized for being Idahoans).
Yes, if it's actually (blue & Idaho) ~> default, and your model ignores blue, then the greens will pay a higher cost. If color is redundantly encoded then your model can partially fix this and penalize the blues in Idaho.
Do you consider this situation unjust? If so, you might be unhappy to learn that the entire goal of the field of algorithmic fairness is to do something along these lines.
> Available data on blacks specifically is completely irrelevant if blacks and whites aren't fundamentally different. The white model will generalize.
I should have been clearer that I was responding to this part of your comment. Even if blacks and whites aren't fundamentally different (in the sense that your race does not directly cause an outcome of interest), you can produce biases that are essentially a misattribution about the relationship between race and that outcome. Worse, if there _is_ a relationship, you can reverse the direction a model estimates for it (Simpson's paradox).
> Do you consider this situation unjust? If so, you might be unhappy to learn that the entire goal of the field of algorithmic fairness is to do something along these lines.
I don't think the creation of tools to accommodate this specific purpose is bad, per se. Whether or not they are the appropriate tool to use is a different question.
OK, I guess I'm supposed to agree with you if I beg the question that "available data on blacks specifically is completely irrelevant"...? I do think that the distribution of data specific to blacks is relevant.
Ok, so now we have all acknowledged that we are "race realists" or "scientific racists" in this conversation. ( https://en.wikipedia.org/wiki/Scientific_racism )
Anyway, we've now accepted that blacks and whites may behave differently. For example, let's suppose we have all the training data we need to accurately recognize that one race doesn't pay back their loans as much as the others, all else held equal.
What should we do about it? Concretely, how many bad loans should we issue in the name of "fairness"? How large a subsidy must the responsible races pay to the deadbeat ones?
I don't know that either I or Dr. King Jr. has to subscribe to scientific racism just because we subscribe to the reality that folks of different racial backgrounds have a higher probability of having been shortchanged historically - and thus that any machine learning approach that doesn't factor this in will risk perpetuating such disadvantages, which kind of defeats the ostensible purpose of using machine learning to apply public policy in the first place.
Historically isn't the issue. The issue is a simple factual question of whether, all else held equal, black people repay their loans at the same rate as whites in identical financial circumstances. The fact that in aggregate financial circumstances might be different isn't important to this question.
If they do, then you don't need to worry about algorithms discriminating. Insofar as they do discriminate, it's merely sampling error (i.e. it shrinks like O(1/sqrt(N)), where N = Nwhite + Nblack), and they are just as likely to discriminate in favor as against.
If they don't, then you subscribe to scientific racism, or the belief that blacks and whites in identical circumstances behave fundamentally differently.
(I describe these different cases in explicit detail here: https://www.chrisstucchio.com/blog/2016/alien_intelligences_... )
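The O(1/sqrt(N)) sampling-error claim above is easy to make concrete. Assuming an illustrative true repayment rate of 0.9, the standard error of the estimated rate at various sample sizes is:

```python
import math

# Standard error of an estimated repayment rate shrinks like 1/sqrt(N).
# The true rate p = 0.9 is an invented number for illustration.
p = 0.9
for n in (5, 100, 10_000):
    se = math.sqrt(p * (1 - p) / n)
    print(f"N = {n:>6}: standard error = {se:.4f}")
# N =      5: standard error = 0.1342
# N =    100: standard error = 0.0300
# N =  10000: standard error = 0.0030
```

So an estimate based on 5 borrowers can easily be off by 10+ points in either direction, while one based on 10,000 is pinned down to a fraction of a point.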
So do you believe race affects reality independent of other factors? And assuming you do subscribe to scientific racism, what should we do about it?
> The issue is a simple factual question of whether, all else held equal, black people repay their loans at the same rate as whites in identical financial circumstances
Oh if you put it that way, then I don't know. Because that's not the reality that's being dealt with, in which whites and blacks have identical circumstances. I think you're reading something into this that others aren't.
> Because that's not the reality that's being dealt with, in which whites and blacks have identical circumstances.
Of course it is. There may be 5 blacks and 100 whites with a credit score of 830. But as long as blacks and whites with an 830 credit score behave the same, then data from whites will generalize to blacks and the problem tlb brought up doesn't apply. Redundant encoding is also irrelevant - this is useless information so an accuracy maximizer has no reason to pay any attention.
Insofar as blacks and whites with an 830 credit score behave differently, then algorithms might treat them differently. That's the "race realism" hypothesis.
Having the same credit score, or other data, does not mean they have identical circumstances. For example, your willingness to follow the rules of society, even to your detriment, might be a result of whether society's rules have treated you fairly in the past.
So belief that you'll get different default rates given some financial data does not imply that you are a scientific racist. To create identical circumstances you'd have to do a brain swap (and some other relevant internal organs) on some black and white infants. A scientific racist view is that then the likelihood of paying off the loans would follow the brain, not the skin.
FWIW, I present a case where race does not directly cause increased failure to repay, but common approaches to modeling could discriminate against race.
These issues have been discussed in detail in statistical considerations of Simpson's paradox. One need not accept that racial differences directly affect an outcome of interest, in order to be concerned about a model being biased against race!
I don't know what you mean by "fundamentally different" but there are definitely going to be demographic differences that the algorithm could use to predict race with good probability from hidden variables. (Where they live, for example.) History has an influence that's hard to remove from the dataset.
I'd guess that another reason this problem is hard is that it's about defining the goal correctly. It's not just maximizing repayment. There is some fairness goal that isn't well-defined.
By "fundamentally different", I mean that the most accurate model will be something like this:
repayment_probability = 1 x downpayment_frac + 0.5 x credit_score + A x isBlack

for some A != 0. I.e., if A = -0.2, then a black borrower with a 60% downpayment is as likely to pay back a loan as a white borrower with a 40% downpayment. If A = 0, then the bias described by tlb and danso won't occur.
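The claimed equivalence can be checked numerically. All coefficients here are the hypothetical ones from the comment above (the 0.7 credit score is an arbitrary held-equal value):

```python
# Hypothetical linear model from the comment above, with A = -0.2.
def repayment_probability(downpayment_frac, credit_score, is_black, A=-0.2):
    return 1.0 * downpayment_frac + 0.5 * credit_score + A * is_black

# A black borrower with a 60% downpayment scores the same as a white
# borrower with a 40% downpayment, credit score held equal:
black = repayment_probability(0.60, 0.7, 1)  # 0.60 + 0.35 - 0.20 ~ 0.75
white = repayment_probability(0.40, 0.7, 0)  # 0.40 + 0.35        ~ 0.75
print(black, white)
```

With A = 0 the isBlack term drops out entirely, which is the "white model generalizes" case.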
What you describe with hidden variables is called "redundant encoding", and it's just a way of recovering the `A x isBlack` term if you remove `isBlack` from your input set. But if blacks and whites repay their loans at the same rate (holding all else equal), redundant encoding won't happen - it doesn't actually improve accuracy.
I describe this in more detail here: https://www.chrisstucchio.com/blog/2016/alien_intelligences_...
I agree with you that the core issue is an unspecified true goal. Folks are unwilling to publicly and explicitly state how many bad loans should be issued for fairness or how many unqualified students should be allowed into college for diversity.
Or for an example closer to home, how much we should lower the bar in order to hire more non-Asian minorities in tech? Daring to ask that question gets you some pretty hostile responses.
Repayment rates are not just individual - they also depend on the financial strength of friends and family who can help you out if you get in trouble. So, I think we have to assume that there are performance-relevant differences that an algorithm will detect.
Also, unless the dataset has information about families, this isn't based on your actual family. It's based on the average benefit people like you get from their family.
Lives of members of social groups can differ for historical reasons. Because of this, the best selection of features (i.e., how we select and encode relevant aspects of a dataset for a particular problem), as well as the correlations between them, may differ between groups. The question is not whether there is fundamentally a difference between, in this case, racial groups vis-a-vis paying back loans (i.e. that the only feature required in your model would be "is member of this group"), but which traits of life have, for whatever reason, ended up leaving a quantifiable trace in our databases, and how those traits are distributed within each group.
One hypothetical example: suppose that there existed a group G that was not able to go to the top n% of universities due to discrimination. Your company uses some rank of university attended as one of the features input to its favorite machine learning algorithm. However, the dataset you trained on excluded group G. Within this group, the best university individuals have been able to attend is X which is by definition not in the top n%. Had the algorithm been trained on this group it would have observed that school X is highly correlated with success in this group, even if not in the original training set used. As is, your ML system assigns a low probability to members of group G.
Issues like this will be hard to prevent. While that doesn't mean we shouldn't work hard to make real innovations in ML, I think the legal approach of a "right to explanation" as analyzed in http://arxiv.org/pdf/1606.08813v3.pdf and recently added to European law is regardless a helpful tool to ensure accountability.
Yes, if your training data excludes relevant features then you can't use them. No one disputes this.
However, once you start including such people in your training data, these issues are not hard to prevent. In fact, ML systems will often do this accidentally even when you don't want them to (when the sign of the bias has the politically incorrect direction). It's called redundant encoding.
See the section of my blog post "What if we scrub race, but redundantly encode it?" where I do calculations to show the effect of this: https://www.chrisstucchio.com/blog/2016/alien_intelligences_...
In short, if your data is biased against a group, but you include group membership either directly or via redundant encoding, your algorithm will fix the bias as best it can.
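A sketch of that correction effect, with every number invented for illustration: suppose some upstream bias depresses the observed credit scores of group X by 100 points relative to their true quality, while actual repayment depends only on true quality. Then at the same observed score, X members repay more often - a signal that a model given group membership (directly or via redundant encoding) can pick up and use to undo the biased input:

```python
import random

random.seed(2)

# Invented setup: group X's observed score understates true quality by 100.
data = []
for _ in range(50_000):
    in_x = random.random() < 0.2
    true_quality = random.gauss(700, 50)
    observed_score = true_quality - (100 if in_x else 0)
    # repayment depends only on true quality, never on group membership
    p_repay = min(max((true_quality - 500) / 300, 0.0), 1.0)
    data.append((in_x, observed_score, random.random() < p_repay))

def repay_rate(group, lo, hi):
    """Empirical repayment rate for a group within an observed-score band."""
    rows = [repaid for g, s, repaid in data if g == group and lo <= s < hi]
    return sum(rows) / len(rows)

# At the same observed 600-650 score band, group X repays noticeably more
# often, because their scores understate their quality:
print(repay_rate(True, 600, 650), repay_rate(False, 600, 650))
```

A model that can "see" group X (via the feature itself or a proxy) will learn roughly a +100-point offset for X and price them more accurately than a scrubbed model can.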
The entire purpose of machine learning is to discover hidden features and correlations in messy data, so I fail to see why this is considered surprising.
I generally consider the "right to explanation" to be a fairly transparent attempt by the EU to keep American tech companies out of Europe. The entire purpose of ML is that it can uncover true facts that humans can't. The right to explanation is just an attempt to hobble this power, probably because few Euro companies can do it.
In the case of being treated differently (requiring a different amount of downpayment as security), it probably is racist. More common and less controversial is the case where the signals are in different channels.
For instance, when dealing with immigrants, US banks often fail to see any signal at all because their credit reporting only covers US institutions, and they don't know how to verify employment or schooling abroad. So to start making loans to immigrants from any given country, they need to figure out what the signals are (job, schooling, ...) and how they correlate with risk.
> a Trump voter
Is that really necessary?
Some of us are treating the political system like a blackbox, I'm just sending a different corrupt payload at it to see what the output is.
Perhaps not. I've altered it.
Well, estimating higher risk due to lack of information is not a glitch; rather, it's the rationally correct estimation. Say you're a complete stranger and want to hang out with me - this is pretty scary! However, if I know you, and you're a jerk, you might piss me off during the night, but at least I know you're not a serial killer...
Maybe you'd need a multi-armed bandit algorithm [1] to allow for some exploration of the dataset?
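A minimal epsilon-greedy sketch of the idea (group names, rates, and epsilon are all invented): forced exploration guarantees a new group keeps getting some loans, so its repayment estimate can converge instead of being frozen by an unlucky first cohort:

```python
import random

random.seed(1)

# Hypothetical two-pool lending problem. The lender doesn't know the true
# repayment rates; the "new" demographic is actually just as good.
true_repay = {"established": 0.90, "new": 0.90}
counts = {g: 0 for g in true_repay}
repaid = {g: 0 for g in true_repay}

def estimate(group):
    # add-one smoothing so an unexplored group isn't written off at zero
    return (repaid[group] + 1) / (counts[group] + 1)

epsilon = 0.1  # fraction of loans made purely to explore
for _ in range(10_000):
    if random.random() < epsilon:
        group = random.choice(list(true_repay))  # explore: lend blindly
    else:
        group = max(true_repay, key=estimate)    # exploit: best estimate
    counts[group] += 1
    repaid[group] += random.random() < true_repay[group]

for g in true_repay:
    print(g, counts[g], round(repaid[g] / counts[g], 3))
```

The key line is the epsilon branch: without it, a greedy lender that got unlucky with its first few "new" loans could stop lending to that group forever, exactly the self-reinforcing failure described upthread.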
If this is the case competition will weed it out.
That's... optimistic. In the long run, maybe, but someone has to actually do it.
What is fairness but political accountability?
There is an old joke about how people use statistics like a drunk uses a lamp post: for support and not for illumination. Given this, we can expect people to use AI like everything else in statistics, to support the agenda of whoever is operating it while defraying negative personal accountability for the results, because artificial intelligence. It's just an obfuscated and sophisticated version of, "Computer says no."
The alternative is the near-future headline, "AI confirms racists, sexists are on to something."
I wish I could upvote this remark twice (or more.)
This is pretty much the only important concept for figuring out how we will use this tech politically.
Because it takes genius-level intelligence to be able to figure out whether you're just telling yourself what you want to hear, and incredibly rare responsibility to remember to [keep on trying to] do so, individuals and tiny groups may be able to use AI for these sorts of things, but large groups, municipalities, states, corps, etc. never will.
The systems we can understand and manage as a group are vastly simpler than those we can understand and manage as individuals.
Everyone suggesting that we ought to legislate that machines must be illogical/suboptimal is missing the point.
If machine learning algorithms are unfairly discriminating against some group, then they are making sub-optimal decisions and costing their users money. This is a self-righting problem.
However, a good machine learning algorithm may uncover statistical relationships that people don't like; for example, perhaps some nationalities have higher loan repayment rates. In these cases, the algorithm is not at odds with reality; the angsty humans are. If some people want to force machines to be irrational, they should at least be honest about their motivations and stop pretending it has anything to do with "fairness".
This is a great point. People believe that most groups are basically equal; this is true in the sense that if people were raised in identical environments with equal opportunities, then it probably wouldn't matter much what group they were in, but wrong because that isn't the world we live in. Different groups on average experience very different environments. Machine learning doesn't care why the differences between groups arise, but people do. Fundamentally the question is whether we want to base our decisions on how the world is, or on how we want the world to be.
It comes down to a choice between equality of opportunity versus equality of outcome (or some mix of the two). You can't have both - granting equal opportunities will result in unequal outcomes for all kinds of fair and unfair reasons; and ensuring equal outcomes requires unequal opportunities (e.g. quota systems).
For unfair stereotypes it's simple: you just ignore them. But some group differences will be real - it would be a mighty coincidence if so many diverse groups magically happened to be identical in all respects.
So it's up to society to decide what we will do if it turns out that, other observable factors being equal, race/religion/ethnic background/etc. X actually is 10% more likely to default on a loan.
I keep making essentially the same point about race/gender discrimination in tech. If group X is as effective as group Y but you can get away with paying them 20% less, why would you NOT hire group X? There's no corporation that's so racist or sexist that it'll turn down saving 20% on payroll.
The issue here isn't that machine learning gives wrong answers, it's that our definition of 'fair' is irrational.
>If group X is as effective as group Y but you can get away with paying them 20% less, why would you NOT hire group X?
Hypothetical possibility: members of group X are not perceived as 100% as effective as group Y because of pervasive bias by the employers that assumes their incompetence. They are generally perceived to be 80% as effective as a standard Y member despite actual 100% performance, and paid accordingly. A member of X needs to be 120% as effective as a Y member to be perceived at 100% Y efficiency because of stereotypes coloring their perception and an inability to objectively evaluate their performance.
Some non-hypothetical studies touching on this:
http://www.nber.org/papers/w9873.pdf
http://www.pnas.org/content/109/41/16474.full.pdf+html
http://advance.cornell.edu/documents/ImpactofGender.pdf
http://www.socialjudgments.com/docs/Uhlmann%20and%20Cohen%20...
Ideally, management would just look at the numbers at some level and figure out if there was some measurable pay disparity they could arbitrage and make money off of. I'm sure some companies have. This is a benefit of impersonal, faceless corporate structures; they don't have human qualities like biologically motivated bias in judgement. On the other hand, they don't have qualities like empathy either, so it's not clear if it's preferable or not.
Possible. But there are still potential problems with that, tying into the article's main issue of feedback loops/bias in ML algorithms. Let's assume pay is correlated with perf/job title, and that unintentionally biased managers consistently rate members of group X at 80% of what a member of group Y would get for identical performance. Let's assume they're similarly only 80% as likely to be promoted given identical performance. Anyone looking at the data would find that pay for X and Y members is fair given their perf scores/job titles, and that members of X tend to underperform compared to Y. They could suspect bias in the perf scores from that, or they could conclude that members of X are fairly paid but statistically underperforming. An objective evaluation of a biased/unfair dataset doesn't necessarily guarantee a fair/objective outcome.
I think you're right and obviously making the machine make suboptimal decisions is definitely not a good solution.
However I think a case can be made that certain protected attributes should be censored. Not to prevent the algorithm from making optimal decisions, but to prevent it from overfitting on those attributes. Which, if you think about it, is essentially what discrimination is.
If the algorithm is overfitting, it's costing its users money in the general case. Again, self righting. We don't need to hide the data we think it's overfitting on; any modern production ML system shouldn't have trouble with extraneous data. You don't see loan bots denying loans to people because they're named "Phil", for example, even though the bots have that information.
(Good) ML algorithms don't suffer from human biases; they don't know that there's a categorical difference between e.g. race and shoe size, so we don't need to hide race from these algorithms. That is, of course, unless one's explicit goal is to cripple and pessimize the algorithm for political reasons.
After studying this issue, and learning a lot more about learning and optimization, I've come to the conclusion that the best solution [1] is probably explicit racial/sexual/other special interest group quotas.
Specifically, we should train a classifier on non-Asian minorities. We should train a different classifier on everyone else. Then we should fill our quotas from the non-Asian minority pool and draw from the primary pool for the rest of the students.
As this blog post describes, no matter what you do you'll reduce accuracy. But every other fairness method I've seen reduces accuracy both across special interest groups and also within them. Quotas at least give you the best non-Asian minorities and also the best white/Asian students.
Quotas also have the benefit of being simple and transparent - any average Joe can figure out exactly what "fair" means, and it's also pretty transparent that some groups won't perform as well as others and why. In contrast, most of the more complex solutions obscure this fact.
[1] Here "best" is within the framework of requiring a corporatist spoils system. I don't actually favor such a system, but I'm taking the existence of such a spoils system as given.
The problem is that you run out of "good" NAMs (or women with the exact same career preferences as men, etc.) extremely quickly. The demand for "good" NAMs in any given field vastly exceeds supply, since quotas tend to be set at population proportion.
Either you allow an algorithm to be ruthlessly fair, or you introduce bias and never get the problem solved correctly, because someone, somewhere, will still find a way to gripe about the amount of bias when, inevitably, it goes against them, or is perceived to be against them due to lack of knowledge. Then you wind up bikeshedding over the bias and not the actual problem.
I am actually optimistic on Big Data's effect in equality.
Small data is actually kind of the problem. When you have limited ability to process data or limited data density then your segmentation ability is limited to small data like state, county, zip code, credit score, whether you own a home, etc.
Big data processing, big bad ML algorithms and the ubiquity of data are making advanced segmentation available that arguably allows for more equitable outcomes.
Bayes and discrimination law don't seem like good partners.
> As a result, the advertiser might have a much better understanding of who to target in the majority group, while essentially random guessing within the minority.
If this is the case, then it should be detected and ML should NOT be used for the minority class. There are many classifiers out there which work on one-class problems.