Thursday, November 12, 2020

HOW THE BLACK LIVES MATTER MOVEMENT IS RESHAPING OUR THINKING ON BIAS IN AI

Rangita de Silva de Alwis, Christian Zabilowicz and Gitanjali Swamy1

Introduction


As our public reckoning on systemic racism and structural bias reaches its climax following weeks of Black Lives Matter protests around the world, we have been asking ourselves and our academic colleagues what role AI plays in all this – and what role AI ought to play in the future. In the same way that the MeToo Movement challenged our collective conscience about sexism in the workplace, we believe that Black Lives Matter and other movements that aim to address systemic discrimination in society have the potential to reshape our thinking on bias in AI. 


Writing recently in The Atlantic, Secretary Hillary Clinton called for gender-blind reviews of resumes, in line with the success of gender-blind orchestra auditions. All of this calls for a fresh interrogation of the role of AI in addressing implicit bias in the workplace.


Many before us have detailed the causes of bias in AI and in doing so have contributed hugely to the positive reforms that have followed. Yet, few have gone further than identifying and addressing the biases that exist on the surface, and fewer still have made the case for AI’s potential to provide the greatest step forward yet towards substantive equality.


Substantive equality, as opposed to formal equality, is a fundamental concept in human rights law which requires proactive and positive measures to be taken to ensure that persons who have faced historic discrimination have a genuinely equal chance of satisfying the criteria for access to a particular social good, such as employment or education.2 Whereas formal equality models disavow policies that aim to redress imbalances on a systemic level, a substantive sense of equality envisages an inclusive and intersectional approach that takes into account discriminatory barriers in all their forms, not just those that are obvious or intended.3 In practice, this involves analyzing the particular experiences that a person has lived through, discerning the extent of any disadvantage they have faced, and equalizing the playing field to that same extent, as and when that person seeks to access a social good.


As support for this substantive conception of equality grows, we argue that the time is ripe to shift our focus from reducing surface-level bias in AI to finding ways to utilize AI as a means of dismantling institutionalized discrimination. Using the employment context to support our position, we submit that this means ending our reliance on formal equality metrics to measure the success of AI systems and replacing them with a standard based on the principle of substantive equality. While we recognize that there are risks to manipulating AI to achieve a particular outcome or result, particularly in countries with a poor track record of public governance and adherence to the rule of law, we argue that by requiring diverse interpretations of the norms subject to manipulation, and by ensuring that the underlying process is fully transparent, such risks can be effectively mitigated.


The causes of bias in AI


The causes of bias in AI are well-documented but it is important to reiterate them here, for an understanding of such causes is crucial for recognizing why formal equality models fail.


In general, bias is said to creep into AI systems in three ways:


1. PROGRAMMER BIAS: In its most obvious form, bias can enter AI systems through the conscious and unconscious biases of their human programmers. If, for example, a company wanted to use AI to screen resumes and identify leaders within an applicant pool, and that company either consciously or unconsciously believes masculine qualities to be demonstrative of leadership, then the AI system may become discriminatory as a result of the company's discriminatory interpretation of what constitutes leadership.


2. DATA BIAS: Bias can also find its way into AI programs through source data. Gender, racial and other prejudices can creep into data sets because data sets are often reflective of the deep-seated prejudices in society. When an AI system uses a data set that contains these prejudices, it will reproduce them in its algorithmic outcomes, as the short sketch following this list illustrates.


3. LEARNING ALGORITHM BIAS: Finally, AI systems that use machine-learning tools can also be biased. Machine-learning algorithms produce outcomes that are based in part on training data and in part on their own 'learning' – it is this second component that can give rise to problems, for the AI system may be capable of drawing its own biased conclusions, and these conclusions will be difficult to identify.
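To make the data-bias pathway concrete, the following is a minimal, purely illustrative sketch in Python. The groups, qualification scores, and numbers are invented assumptions, not real hiring data; the point is only that a screener fit to historically prejudiced decisions will reproduce those decisions even for equally qualified candidates.

```python
# A minimal, hypothetical sketch of the "data bias" pathway: a screener
# trained on historical hiring decisions reproduces the prejudice embedded
# in those decisions. All numbers are invented for illustration only.
import random

random.seed(0)

# Simulate 10,000 historical applicants. Qualification is drawn from the
# same distribution for every group, but the historical hiring decision
# penalised one group (here labelled "B") regardless of qualification.
history = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    qualification = random.random()          # identical distribution for A and B
    hire_prob = qualification * (1.0 if group == "A" else 0.6)  # historical prejudice
    hired = random.random() < hire_prob
    history.append((group, qualification, hired))

# "Train" the simplest possible screener: the empirical hire rate for each
# (group, qualification-band) cell. Any learner fit on these labels would
# absorb the same pattern.
def band(q):
    return int(q * 5)          # five coarse qualification bands

counts, hires = {}, {}
for group, q, hired in history:
    key = (group, band(q))
    counts[key] = counts.get(key, 0) + 1
    hires[key] = hires.get(key, 0) + int(hired)

def predicted_score(group, q):
    key = (group, band(q))
    return hires.get(key, 0) / max(counts.get(key, 1), 1)

# Two equally qualified candidates receive different scores purely because
# the training data reflects past discrimination.
print("Score for group A, qualification 0.8:", round(predicted_score("A", 0.8), 2))
print("Score for group B, qualification 0.8:", round(predicted_score("B", 0.8), 2))
```

Nothing in the screener refers to group membership directly; the disparity enters entirely through the historical labels, which is precisely what makes this form of bias easy to overlook.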


Addressing bias using the formal equality model


After identifying the causes of bias outlined above, most of our colleagues understandably go on to discuss the various ways that we can reduce such bias. For instance, Kimberly Houser, in relation to gender-based bias in AI, argues that we need only follow 'responsible' practices when developing and deploying AI to mitigate the risk of bias – she explains that 'responsible' practices include cleaning source data before use and employing diverse programmers.4 Her view is reflective of the general academic stance on tackling bias in AI.


The problem with this approach is two-fold. First, these practices have severe limitations in and of themselves. While it is true that data sets can be balanced and more diverse slates of programmers can be employed, this, on its own, will not be enough to tackle bias to the extent that Houser suggests. Bias and discrimination are complicated phenomena that have proven to take many forms. Even if we are able to modify data sets so that they are balanced with respect to, say, gender, this will not address the full and intersectional spectrum of bias and discrimination that women experience. For instance, Houser discusses balancing data sets by replicating the profiles of women within the data set – but how will this solve the problem if the profiles of women being replicated are those of white women who have never had childcare responsibilities or never been a victim of violence? Houser's solutions ignore the particular disadvantages attached to the circumstances a person finds themselves in. It is this complexity that suggests more is needed than formal equality.5
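To illustrate why single-attribute balancing falls short, here is a small hypothetical sketch. The record counts and the caregiving attribute are invented for illustration and do not come from Houser's data; they simply show that replicating one group's profiles equalizes the headline attribute while leaving intersectional subgroups as rare as before.

```python
# A hypothetical sketch of why balancing a data set on a single attribute
# (here, gender) does not balance it along intersectional lines. The
# records and attribute names are invented for illustration only.
from collections import Counter
import random

random.seed(1)

# Toy resume records: (gender, has_career_gap_for_caregiving)
data = [("man", False)] * 700 + \
       [("woman", False)] * 250 + \
       [("woman", True)] * 50          # women with caregiving gaps are rare

# "Balance" by replicating women's profiles until the gender counts match,
# which is the kind of fix discussed above.
women = [r for r in data if r[0] == "woman"]
men = [r for r in data if r[0] == "man"]
balanced = men + [random.choice(women) for _ in range(len(men))]

print("Before:", Counter(data))
print("After :", Counter(balanced))
# The gender totals now match, but replication preserves the internal
# composition of the replicated group, so women with caregiving gaps remain
# a small minority of the "balanced" set.
```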


The second issue, and the one that is of primary concern in this article, is that the approach does not go far enough; it does not redress the institutionalized bias that may exist even when the algorithmic outcome is 'accurate'. To give an example, consider again an AI system that screens resumes and identifies leaders. Prima facie, the AI system may be doing exactly as the designer intends – producing consistently accurate outcomes that are reached regardless of the candidate's gender, race, sexuality, or other protected characteristic. Indeed, at this point, many would argue that we have achieved "equality". However, removing these barriers does not mean that minorities and women who have faced a history of discrimination will in fact be equal. As Fredman explains, "those who lack the requisite qualifications as a result of past discrimination will still be unable to meet job-related criteria."6 The formal equality model assumes that once we have equal opportunities, nothing more needs to be done. But equality of opportunity is compatible with unequal results.7
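Fredman's point can also be made numerically. The following hypothetical sketch applies an identical, facially neutral threshold to two groups whose access to the rewarded qualifications differs because of past discrimination; the distributions and threshold are invented assumptions used only to show how a neutral rule can yield unequal results.

```python
# A hypothetical numeric sketch: a screening rule that is perfectly neutral
# at the point of decision still yields unequal results when past
# discrimination has shaped who holds the requisite qualifications.
# The distributions below are invented for illustration only.
import random

random.seed(2)

def qualification(group):
    # Same underlying ability, but group "B" had less access to the
    # credentials the job-related criteria reward (a legacy of past
    # discrimination), modelled here as a lower mean.
    access_penalty = 0.0 if group == "A" else 0.2
    return max(0.0, random.gauss(0.6 - access_penalty, 0.15))

applicants = [(g, qualification(g)) for g in ["A", "B"] for _ in range(5_000)]

THRESHOLD = 0.7                      # identical, facially neutral criterion
selected = {"A": 0, "B": 0}
totals = {"A": 0, "B": 0}
for group, q in applicants:
    totals[group] += 1
    selected[group] += int(q >= THRESHOLD)

for group in ("A", "B"):
    rate = selected[group] / totals[group]
    print(f"Group {group}: selection rate {rate:.1%}")
# The rule treats every individual identically, yet the selection rates
# diverge sharply: formal equality of opportunity, unequal results.
```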


In fact, traditional AI approaches completely miss the point that diversity is a far broader issue than mere checkboxes on a few external demographic factors. To understand what diversity truly means, we must understand the underlying notion of group collective intelligence. Collective intelligence is a term used to describe a group's collective capacity and capability to solve diverse problems.8 Human collective intelligence is created by differences in people's perspectives, heuristics, interpretations, and predictive models, and those, in turn, are shaped not just by narrow demographic categories but by a far more inclusive view that encompasses all identities, experiences, and training. The human race thrives because humans as a species have understood and mastered the use of collective intelligence; we have acted on the insight that diverse groups of average problem solvers consistently outperform homogeneous groups of excellent problem solvers.9 Bringing diversity into human organizations is therefore about more than social justice alone; it is also about creating more effective organizations. It is thus critical to adjust AI algorithms to remove bias, so that they preserve the collective intelligence that greater diversity provides.
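For readers who want to see the intuition behind the result cited above in code, the following is a heavily simplified toy version of a "diversity versus ability" simulation in the spirit of Page's model. The landscape, heuristics, team sizes, and relay search are invented parameters and simplifications; the sketch illustrates the setup rather than reproducing Page's formal result.

```python
# A toy sketch of the intuition behind "diversity trumps ability": on a
# rugged problem, a team of solvers with varied heuristics can search more
# of the space than a team of the individually best solvers who share
# similar heuristics. Heavily simplified; parameters are invented.
import random

random.seed(3)

N = 200                                              # size of a circular solution space
landscape = [random.random() for _ in range(N)]      # value of each solution

def make_heuristic():
    # A problem solver is characterised by the three step sizes it tries.
    return tuple(random.sample(range(1, 13), 3))

def climb(start, heuristic):
    # Hill-climb from `start` using the solver's step sizes; return the local peak.
    current = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            candidate = (current + step) % N
            if landscape[candidate] > landscape[current]:
                current = candidate
                improved = True
    return current

def solo_score(heuristic):
    # Average value of the peaks a solver reaches over all starting points.
    return sum(landscape[climb(s, heuristic)] for s in range(N)) / N

def team_score(team):
    # Relay search: members take turns improving the shared solution.
    total = 0.0
    for start in range(N):
        current = start
        improved = True
        while improved:
            improved = False
            for heuristic in team:
                new = climb(current, heuristic)
                if landscape[new] > landscape[current]:
                    current = new
                    improved = True
        total += landscape[current]
    return total / N

pool = [make_heuristic() for _ in range(40)]
ranked = sorted(pool, key=solo_score, reverse=True)

best_team = ranked[:8]                    # the eight best individual solvers
diverse_team = random.sample(pool, 8)     # eight randomly chosen (more varied) solvers

print("Team of best individuals:", round(team_score(best_team), 3))
print("Random (diverse) team:   ", round(team_score(diverse_team), 3))
```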


Addressing AI through a substantive equality lens


In view of the above, and in support of BLM and other movements to end institutionalized discrimination and victimization, we believe that AI must adhere to the substantive equality model. Targeting disadvantage rather than aiming at neutrality allows us not only to redress the historic and deep-seated legacy of bias and discrimination by leveling the playing field, but also to emphasize a representation-reinforcing theory of participation.10 Indeed, given that past discrimination and other social mechanisms have perpetually blocked the avenues for political participation by particular minorities, representation-reinforcing equality laws are needed in AI to compensate for the muffling of political voice and to open the channels for greater participation in the future.


But we ought to offer one word of caution. We recognize that there are risks to manipulating AI to ensure a particular outcome or result; we acknowledge that our position may open a Pandora's box for abusive governments to manipulate AI to suit their agendas. However, we argue that these risks can be mitigated if two safeguards are put in place:


1. DIVERSITY OF INTERPRETATIONS: We submit that manipulation must be focused exclusively on addressing the underlying conditions causing bias. This means ensuring respect for diversity of interpretations, rather than retroactively trying to manipulate the end outcome itself. For instance, if AI is to identify leadership qualities in a resume, the algorithm should be trained to identify leadership within a context of pluralism. Leadership, in this case, should not be interpreted through the lens of a narrow conception of masculinity; the system should instead draw on plural definitions of leadership, including feminist and intersectional views (a brief sketch of this idea follows this list). In other words, AI must adjust for diverse interpretations of a concept or equitable outcome.


2. TRANSPARENCY: We further submit that any manipulation should be fully transparent as well as subject to public governance. We posit that the checks and balances in a functional democracy mitigate most of the risks of foul play. This is analogous to how society ensures that the voting process in a democracy works: at no stage do we permit manipulation of the outcome itself to suit, say, a government's agenda or a corporation's self-interest; rather, we ensure that the underlying process is fully transparent, free from bias, and under public governance.
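The sketch below illustrates the first safeguard in minimal form: scoring "leadership" against several plural rubrics rather than a single narrow one. The rubric names, signals, and weights are invented assumptions for illustration only, not a proposal for specific criteria.

```python
# A hypothetical sketch of the first safeguard: scoring "leadership" against
# several plural rubrics rather than a single, narrow conception. The rubric
# names, traits, and weights are invented for illustration only.

# Each rubric maps observable resume signals to a weight; one rubric alone
# would encode a single community's conception of leadership.
RUBRICS = {
    "hierarchical":  {"managed_team": 0.6, "budget_owned": 0.4},
    "collaborative": {"cross_team_projects": 0.5, "mentorship": 0.5},
    "community":     {"volunteer_organizing": 0.5, "caregiving_coordination": 0.5},
}

def leadership_score(signals: dict) -> float:
    """Average the candidate's score across all rubrics so that no single
    interpretation of leadership dominates the outcome."""
    per_rubric = []
    for weights in RUBRICS.values():
        per_rubric.append(sum(weights[k] * signals.get(k, 0.0) for k in weights))
    return sum(per_rubric) / len(per_rubric)

candidate_a = {"managed_team": 1.0, "budget_owned": 1.0}          # classic profile
candidate_b = {"mentorship": 1.0, "volunteer_organizing": 1.0,
               "caregiving_coordination": 1.0}                    # non-traditional profile

print("Candidate A:", round(leadership_score(candidate_a), 2))
print("Candidate B:", round(leadership_score(candidate_b), 2))
# Under the single "hierarchical" rubric, candidate B would score zero; under
# the pluralist average, both forms of leadership register.
```

Crucially, the adjustment here operates on how the concept is defined and measured, not on the final ranking, which is the distinction the first safeguard draws.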



  1. Rangita de Silva de Alwis is the Associate Dean of International Affairs at the University of Pennsylvania, Non-resident Leader in Practice at the Harvard Kennedy School of Government’s Women and Public Policy Program, and Senior Fellow at the Center on the Legal Profession, Harvard Law School. Christian Zabilowicz is a graduate of the University of Oxford and a current Thouron Scholar at the University of Pennsylvania Carey Law School. Gitanjali Swamy is the Managing Partner of IoTask and a Representative to the UN’s EQUALS Leadership Coalition. Rangita de Silva thanks David Wilkins and Martha Minow of Harvard Law School and Deborah Rhode of Stanford Law School for their insights on the issue of substantive versus formal equality.
  2. Sandra Fredman, Substantive Equality Revisited, 14.3 International Journal of Constitutional Law 712, 724.
  3. Id. Ontario Human Rights Commission, Why are special programs protected?, OHRC Guide to Special Programs and the Human Rights Code, http://www.ohrc.on.ca/en/your-guide-special-programs-and-human-rights-code/why-are-special-programs-protected (last visited July 16, 2020).
  4. Kimberly A. Houser, Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making, 22 Stan. Tech. L. Rev. 290 (2019).
  5. Sandra Fredman, Substantive Equality Revisited, 14.3 International Journal of Constitutional Law 712, 730.
  6. Id at 723. 
  7. Id.
  8. While the concept of collective intelligence has been used for over a century in sociology, political science, and even science fiction, the term itself is attributed to Professor Thomas Malone of MIT, who founded the MIT Center for Collective Intelligence in 2006. The idea is detailed in his book "Superminds", https://cci.mit.edu/superminds/.
  9. There is a much more systematic, mathematical, and computer science-based explanation for these models of collective intelligence that can be found in the work of Dr. Page and his team of collaborators from the University of Michigan. Scott E. Page, The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, New Edition, Princeton University Press, 2008.
  10. Sandra Fredman, Substantive Equality Revisited, 14.3 International Journal of Constitutional Law 712, 729.

