Tech-enabled Discrimination and Racial Inequity at the Border
On November 10, E. Tendayi Achiume, the United Nations special rapporteur on contemporary forms of racism, released a draft report detailing how technology perpetuates racism and discrimination internationally, particularly against refugees, migrants, travelers, and stateless persons at international borders.
The report details how technical systems can be used in racially discriminatory ways in several contexts surrounding border control, immigration, and the treatment of immigrants and refugees, and gives several examples where they have been demonstrated to do so. These systems privilege predominantly white populations, who are often exempted from additional surveillance and whom biometric technologies identify more accurately.
The special rapporteur, a position under the Office of the United Nations High Commissioner for Human Rights, studies and explains issues of racism and discrimination for the United Nations, including how these issues perpetuate human rights violations.
This report focused specifically on the impacts of digital technologies in enforcing racist and discriminatory practices at international borders and against refugees, for two reasons. For one, individuals fleeing a state, or otherwise traveling across borders, have long suffered a number of human rights abuses. In addition, a previous report from the rapporteur covered more general forms of racial discrimination through technology, so this report narrows its focus to the border context.
Technology has been used in the past to justify several varieties of human rights abuses against marginalized groups. In the 19th and early 20th centuries, scientific theories and medical technologies were applied to justify eugenic practices and other human rights violations in many countries, including Australia, the United States, and Nazi Germany. In particular, the UN report cited how the atrocities of the Holocaust were made possible in part by data collection technology supplied by IBM.
Vigilance around how technology is used in the future is critical to protecting vulnerable and marginalized populations seeking refuge.
Conversations surrounding border crossing and refugees are intrinsically linked with conversations about injustice. Examples include the United States’ family separation policy at the southern border, atrocities suffered by refugees traveling across Africa to reach the Mediterranean coast, and Rohingya refugees being returned from Bangladesh to unsafe living conditions in Myanmar. Given the history of human rights abuses at borders and involving migration, studying the potential for abuse, and how to prevent it, is necessary.
These wrongdoings are increasingly facilitated by technology.
One example of the racialized use of technology cited in the report is Ireland’s unequal use of border screening technology. Only citizens of certain nations are eligible for an expedited customs process that allows them to skip traditional security and customs procedures in favor of a more streamlined, less invasive experience. According to a press release from the Irish Naturalisation and Immigration Service, this program is available only to people arriving from the EU/EEA, Switzerland, the United States, Canada, Australia, New Zealand, and Japan.
All but two of these nations are in the Global North, and all but one are predominantly white. Such a policy increases the burden on people from the Global South or from nations that are predominantly non-white. BIPOC have experienced racial profiling while traveling for years; a program like this, which uses an electronic “eGate” to automatically exempt predominantly white travelers from traditional surveillance while providing no pathway to the same exemption for others, is unjust.
The report also explains how travelers are commonly tracked and processed by automated systems. Nations including the United States, Canada, and China increasingly deploy border crossing systems that rely on biometric technologies, such as facial recognition and fingerprint scanning.
This is in spite of the fact that previous research by former Google Ethical AI team co-lead Dr. Timnit Gebru, Algorithmic Justice League founder Joy Buolamwini, and others has repeatedly demonstrated that these systems fail more frequently for BIPOC individuals, especially BIPOC women: commercial systems identified the gender of darker-skinned women correctly at rates under 66%, compared to an over 99% accuracy rate for lighter-skinned men. By those figures, images of Black women are more than 30 times more likely to be misclassified by these algorithms than those of white men.
The failures of these systems have had serious repercussions, such as when Robert Williams, a Black man, was wrongfully arrested by Detroit police after a facial recognition algorithm erroneously identified him as the suspect in a 2018 shoplifting incident.
Continuing to leverage these systems, which frequently fail BIPOC, against refugees and displaced populations that are already constrained by a number of oppressive systems is highly discriminatory.
The report also details how groups seeking aid are disproportionately forced into data collection programs, with organizations like the Red Cross collecting detailed information, including biometric data, on aid recipients. According to the UN report, aid programs often require biometric data as a condition of receiving assistance. Even the United Nations’ own World Food Programme (WFP) has partnered with Palantir, a data mining firm that has ties to the U.S. military and to large police departments with long histories of racial discrimination, and that has helped Immigration and Customs Enforcement deport undocumented immigrants.
Requiring people in these populations to submit unchangeable, personal data about themselves in order to receive aid casts them into a wide dragnet of surveillance that is difficult, if not impossible, to later be removed from. These databases can then be used by agencies, such as Immigration and Customs Enforcement or U.S. police departments, to identify suspects in crimes.
They can also be used for other government surveillance programs or even be shared with potentially harmful groups. One concern the UN report cited was that Bangladesh and India could share data on Rohingya refugees with Myanmar, which would make these refugees more vulnerable to targeted abuse if they were sent back to their former country.
Getting basic aid from any agency should not require handing over confidential data.
Another major concern the UN report detailed is the harm caused by autonomous border systems, such as the unpiloted military drones tested by Frontex, the EU border agency. These technical systems were involved in pushing back refugees attempting to enter the EU through Greece and allowed such abuses to be carried out at far greater scale.
Additionally, “smart border” technologies employed along the United States-Mexico border have been documented to push migrants into more dangerous and physiologically taxing routes, resulting in a higher number of migrant deaths.
As governments willfully adopt these discriminatory, technologically assisted systems that oppress already marginalized communities, it is critical for those working in technology to think seriously about what they want their work to be used for, and how to exercise that agency.
Recent movements, such as the protests against Google’s involvement with a Department of Defense project to improve drone strikes, indicate a growing recognition of the need for greater intentionality and ethical processes in technology. Recently, companies like Amazon, IBM, and Microsoft halted or restricted sales of facial recognition technology to law enforcement, showing how placing restrictions on technical systems has the potential to reduce disproportionate harms to BIPOC.
The report concluded with a series of recommendations to government officials, including a moratorium on the use of surveillance technology until human rights safeguards and regulations ensuring greater transparency and accountability can be adopted, and a requirement for racial equality impact assessments before such systems are put in place.
Last updated 12/18/20
Joey is a student at the University of Washington. He is interested in how technological systems interact with and shape power dynamics, and how we can alter these interactions to create a more just society. In his free time, Joey enjoys playing Spikeball and Frisbee, reading, and learning songs on the steel drum.