In the last few days of the Trump administration, the Department of Justice attempted to ram through a regulatory change to Title VI of the Civil Rights Act of 1964. Abandoning the existing standard of "disparate impact," the administration sought to require proof of an intent to discriminate before a policy could be considered a violation of the Act. Civil rights organizations condemned the move because of the difficulty of proving intent to discriminate in such cases.
“[I]t’s difficult to prove racially motivated intent behind [policies],” Shiwali Patel, senior counsel for the National Women’s Law Center, told the New York Times.
Rule changes like this, along with other intent-based standards, raise complex issues in our increasingly technologically mediated society. For example, Sony AI researcher Alice Xiang raised the question on Twitter, "what would such a change on intent mean for discrimination in artificial intelligence systems?"
It is well established that automated decision-making systems (ADMSs), technological systems that make decisions without human input, can produce racist outcomes. This has been demonstrated in domains ranging from facial recognition and search results to recidivism algorithms and rideshare pricing.
Even beyond the context of ADMSs, there are many examples of racist outcomes being propagated by artificial intelligence-based systems. For example, Microsoft shut down its Tay chatbot less than a day after its launch because it began spreading virulently racist and anti-Semitic messages. Meanwhile, GPT-3, a state-of-the-art language generation system, has also been shown to generate statements rooted in racist pseudoscience and to articulate other positions antithetical to basic human rights.
Attempts to address the potential harms of future technological systems have also been made. For example, in the Moral Machine project, researchers at the Massachusetts Institute of Technology attempted to crowdsource decisions about whose protection self-driving cars should prioritize, presenting participants with a series of forced binary choices about which person or group of people the car should kill.
The Moral Machine did not sort people into racial categories, although it did differentiate by occupation, gender, age, and law-abiding status. Therefore, even if the Moral Machine were capable of intending to kill members of one group more than another, an intent to racially discriminate would be impossible.
However, if we look at other societal injustices, particularly in the United States context, we can see that a decision-making process for a Moral Machine could have highly discriminatory outcomes. For example, if the self-driving car prioritized the lives of the elderly and deprioritized the lives of criminals – both categories that the Machine was allowed to make decisions on – the car could be highly racially discriminatory due to systemic biases against BIPOC populations in the criminal justice system and the longstanding gap in life expectancy between racial groups.
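The proxy effect described above can be made concrete with a small simulation. Everything here is an illustrative assumption: the group names, population split, and flag rates are invented, standing in for the documented disparities in who carries a criminal record. The point is only that a rule which never consults group membership can still burden one group far more than another.

```python
import random

random.seed(0)

def simulate(n=100_000):
    """Apply a 'race-blind' deprioritization rule to a synthetic population.

    The rule only looks at a 'criminal record' flag, never at group
    membership. The flag rates below are invented for illustration,
    mimicking unequal enforcement across groups.
    """
    record_rate = {"group_a": 0.05, "group_b": 0.15}  # assumed proxy disparity
    deprioritized = {"group_a": 0, "group_b": 0}
    totals = {"group_a": 0, "group_b": 0}
    for _ in range(n):
        group = random.choice(["group_a", "group_b"])
        totals[group] += 1
        has_record = random.random() < record_rate[group]
        # The decision rule itself is group-blind:
        if has_record:
            deprioritized[group] += 1
    return {g: deprioritized[g] / totals[g] for g in totals}

rates = simulate()
print(rates)  # group_b is deprioritized roughly three times as often
```

Despite containing no reference to race, the rule reproduces whatever disparity already exists in its input feature, which is exactly the dynamic the paragraph above describes.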
Additionally, as technology ethics researcher Abby Jaques explains, “the problem is that an algorithm isn’t a person, it’s a policy. And you don’t get policy right by just assuming that an answer that might be fine in an individual case will generalize.” Rather than examining a specific scenario and saying what ought to be best in that specific case, to make moral AI systems, we need to take a more overarching policy-driven approach.
If we only consider individual-level cases when generating ethics policies, we will fail to see the rules and norms that result. One example Dr. Jaques uses is a self-driving car that, all else being equal, has a rule to swerve to avoid pedestrians legally in a crosswalk, potentially damaging the car and killing the driver, but makes no such swerve for jaywalkers. While this might seem more just in an individual case, the resulting rule would amount "to implementing the death penalty for jaywalking."
At a more fundamental level, the factors the Moral Machine used to discriminate between people in its scenarios – including gender, age, physical fitness, economic prosperity, and criminality – are both immoral to discriminate on and, in many cases, illegal bases for discrimination under US law. The conclusions of the report, which found respondents were more willing to let the elderly, the physically unfit, the homeless, and criminals die, reveal abhorrent implicit biases in our society and are scarily reminiscent of past eugenics movements.
If a system of self-driving cars disproportionately killed or injured BIPOC, this would rightly face widespread condemnation. Having machines that would disproportionately murder a particular racial subset of a population would call to mind images of a technological genocide. Allowing a discriminatory system like this to exist without consequences is incompatible with a just society.
However, under a standard requiring intent to prove instances of discrimination, rather than just disparate impact, such a system would likely be allowed to exist. For one, since artificial intelligence systems do not currently exhibit consciousness – nor are they expected to for a long time – it is not possible for such a system to have intent to discriminate.
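By contrast, the existing disparate-impact standard turns on measurable outcomes rather than mental states. A minimal sketch of how such a disparity can be quantified, using the EEOC's "four-fifths" guideline as a commonly cited benchmark; the function name and the audit figures below are hypothetical, chosen purely for illustration:

```python
def disparate_impact_ratio(harmed_a, total_a, harmed_b, total_b):
    """Ratio of outcome rates between two groups.

    Under the EEOC's "four-fifths" guideline, a ratio below 0.8 is
    commonly treated as evidence of disparate impact. Note that the
    calculation uses only outcomes -- no notion of intent appears.
    """
    rate_a = harmed_a / total_a
    rate_b = harmed_b / total_b
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Invented audit numbers for a hypothetical fleet of vehicles:
# 45 of 1,000 pedestrians harmed in one group vs. 90 of 1,000 in another.
ratio = disparate_impact_ratio(45, 1000, 90, 1000)
print(f"{ratio:.2f}")  # 0.50, well below the 0.8 threshold
```

The contrast with an intent standard is stark: this number is computable from observed outcomes alone, while the "intent" of a non-conscious system is not observable at all.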
Looking at the actual human designers of the system might provide a better opportunity to look for intent to discriminate. However, proving intent is incredibly difficult, and since this hypothetical system was not designed to look at race, proving an intent to racially discriminate would be challenging at best.
Race-blind policies, or policies which do not take into account racial disparities and systemic injustice in their implementation, have a history of perpetuating and exacerbating these inequities. Indeed, many of the examples of technologically-enabled racism previously discussed, like the recidivism and ride-sharing algorithms, do not explicitly incorporate race.
Additionally, treating ethics and the design of ethical systems as purely individualistic endeavors is itself potentially problematic. According to some philosophical traditions, such as the South African Ubuntu framework, ethics is inherently relational and contextual.
As a result, a more ethical approach would focus less on the intent of either the system or its designers and more on the outcomes and impacts that these people and systems have on communities. Recognizing the interactions between communities and these systems might prove a more fruitful path to a more just future with more equitable sociotechnical systems.
Just because discrimination is not intended by a policy, action, or AI system does not mean that discrimination won't be a side effect or direct outcome of that policy. Centering the intent of a non-conscious AI system or of the development team that created it, rather than the harms these systems cause, is a poor model for technology policy and will hinder efforts toward equity as these systems interact with more and more of our social systems.
Last Updated 4/22/2021