
Tech’s Tangled Web: Intersectionality and Surveillance in AI

Please note that the goal of Sociology in the News is to provide thought-provoking content and topics that encourage open-ended discussion and engagement in higher education. The positions of the individuals who contribute to this site do not represent the thoughts and opinions of the McGraw Hill Education organization and its employees. We value your opinion and welcome your feedback.

In the age of rapid technological advancement, artificial intelligence (AI) has emerged as a cornerstone of innovation. This progress, however, has not been without challenges. Two of the most critical issues are the intersectionality of biases in AI, particularly in facial recognition technology, and the societal implications of ubiquitous surveillance. Facial recognition software identifies or confirms a person’s identity from facial features, using machine learning algorithms to match those features against existing images of the individual. This article examines both issues, exploring how AI can perpetuate existing disparities and reshape societal norms.
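To make that matching step concrete, here is a minimal sketch in Python of how embedding-based face matching typically works. It is illustrative only: comparing embedding vectors with cosine similarity is a common approach, but the embedding model itself is not shown, and the 0.6 threshold is a hypothetical stand-in rather than any vendor’s actual setting.

```python
# A minimal, illustrative sketch of embedding-based face matching:
# a model (not shown) converts each face image into a numeric vector
# (an "embedding"), and two faces are declared the same person when
# their embeddings are sufficiently similar. The 0.6 threshold is a
# hypothetical stand-in, not any vendor's actual setting.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_same_person(probe: np.ndarray, enrolled: np.ndarray,
                   threshold: float = 0.6) -> bool:
    """Declare a match when similarity exceeds a tuned threshold.

    The threshold trades false matches against false non-matches; if
    the embedding model is less accurate for some demographic groups,
    error rates will differ across groups at any fixed threshold.
    """
    return cosine_similarity(probe, enrolled) > threshold
```

Because both the embedding model and the decision threshold are design choices, bias can enter the system at either point, which is central to the critiques discussed below.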

Intersectionality in AI and the Gender Shades Project

The concept of intersectionality, a term Kimberlé Crenshaw coined in 1989, is crucial in understanding biases in AI (Crenshaw, 1989). It refers to the overlapping systems of discrimination and disadvantage individuals experience due to various aspects of their identity, such as race and gender. In the realm of facial recognition technology, these biases are alarmingly evident.

The Gender Shades Project, led by Joy Buolamwini at the MIT Media Lab, brought to light significant inaccuracies in commercial facial recognition software when identifying women and people of color (Buolamwini & Gebru, 2018). For example, in 2023, Porcha Woodruff, a Black woman who was eight months pregnant, became what is believed to be the first woman wrongfully arrested due to a faulty facial recognition match, in this case by the Detroit Police Department (ACLU of Michigan, 2023). Hers was the department’s third known wrongful arrest stemming from the technology, and at least the sixth reported nationwide, all involving Black Americans, highlighting significant concerns over the racial biases and inaccuracies of facial recognition technology. Woodruff’s wrongful arrest underscores the urgent need to revisit the use of such surveillance technologies in law enforcement, raising critical questions about privacy, racial justice, and the reliability of AI in policing practices (ACLU of Michigan, 2023).

Buolamwini’s research revealed that these systems performed worst on darker-skinned women, with error rates as high as 34.7 percent for that group versus no more than 0.8 percent for lighter-skinned men, a clear instance of compounded bias (Buolamwini & Gebru, 2018). This finding is particularly concerning given the widespread adoption of facial recognition technology across sectors from security to employment and beyond.
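The audit technique behind this finding is straightforward to describe: rather than reporting one overall accuracy figure, evaluate the system separately for each intersectional subgroup. The sketch below illustrates the idea with a few hypothetical records; it is not the study’s actual data or code.

```python
# A sketch of the disaggregated-evaluation technique behind audits
# like Gender Shades: instead of one overall accuracy number, compute
# the error rate separately for each intersectional subgroup (e.g.,
# skin type x gender). The records below are hypothetical.
from collections import defaultdict

records = [
    # (subgroup label, prediction was correct?)
    ("darker-skinned female", False),
    ("darker-skinned female", True),
    ("lighter-skinned male", True),
    ("lighter-skinned male", True),
]

totals = defaultdict(lambda: [0, 0])  # subgroup -> [errors, count]
for group, correct in records:
    totals[group][0] += 0 if correct else 1
    totals[group][1] += 1

for group, (errors, count) in sorted(totals.items()):
    print(f"{group}: error rate {errors / count:.1%} ({errors}/{count})")
```

An aggregate accuracy number can look excellent even while one subgroup experiences most of the errors, which is precisely what disaggregated reporting is designed to expose.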

The Societal Ramifications of Ubiquitous Surveillance

The rise of facial recognition technology also raises the specter of ubiquitous surveillance. Sociological theories such as panopticism, introduced by Michel Foucault, offer a lens through which to understand this phenomenon (Foucault, 1977). Panopticism describes a societal condition in which individuals are kept in a state of conscious and permanent visibility that assures the automatic functioning of power. Applied to surveillance technologies like facial recognition, it suggests that people self-regulate their behavior because they assume they are always being watched, reinforcing control without direct oversight.

This theory finds a modern parallel in the use of facial recognition by law enforcement and other entities, exemplifying a shift toward a surveillance society.

In the United Kingdom, for example, the use of live facial recognition by London’s Metropolitan Police has raised significant concerns. Big Brother Watch, a UK advocacy group, has criticized this practice for its potential to infringe on civil liberties and privacy (Big Brother Watch, 2021). The widespread use of such surveillance technologies can lead to a chilling effect on public life, altering the way individuals behave and interact in public spaces.

In the United States, over half the adult population, more than 117 million adults, appears in facial recognition databases accessible to police, often without warrants, in what researchers have called a “virtual, perpetual lineup” (Levin, 2016). Such technologies, criticized for inaccuracies and biases, disproportionately affect communities of color, further entrenching systemic disparities. This underscores the urgent need for regulatory oversight and ethical frameworks governing surveillance technologies, ensuring they serve public safety without compromising fundamental rights and freedoms.

Combining Perspectives: Intersectionality and Surveillance

When the lenses of intersectionality and surveillance are combined, a more complex picture of AI’s societal impact emerges. Technologies that inherently carry biases can not only misidentify individuals but also unjustly target marginalized communities. This intersection of flawed AI with expanded surveillance capabilities can exacerbate existing inequalities, creating a cycle in which certain groups are more likely to be surveilled and, consequently, misidentified or falsely accused.
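A back-of-the-envelope calculation makes this compounding effect concrete. The rates below are entirely hypothetical, but they show how a modest gap in scanning frequency multiplied by a modest gap in error rate yields a large disparity in expected wrongful matches.

```python
# A back-of-the-envelope sketch of how bias and surveillance compound.
# All numbers are hypothetical. If one group is both scanned more often
# and misidentified more often, its expected false matches grow
# multiplicatively relative to another group.
def expected_false_matches(population: int, scan_rate: float,
                           false_match_rate: float) -> float:
    """Expected wrongful matches = people scanned x per-scan error rate."""
    return population * scan_rate * false_match_rate

group_a = expected_false_matches(100_000, scan_rate=0.10, false_match_rate=0.001)
group_b = expected_false_matches(100_000, scan_rate=0.30, false_match_rate=0.005)
print(f"Group A: {group_a:.0f} expected false matches")  # 10
print(f"Group B: {group_b:.0f} expected false matches")  # 150, a 15x disparity
```

Here a threefold difference in scanning and a fivefold difference in error rate produce a fifteenfold disparity in expected false matches, which is why bias and surveillance intensity cannot be assessed in isolation.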

In 2020, Robert Williams, a Black man from Detroit, became a stark example of AI’s flawed application when he was wrongfully arrested after grainy surveillance footage was incorrectly matched to his driver’s license photo (Bhuiyan, 2023). His case, later highlighted in California’s legislative discussions, underscores the risks of bias within AI technologies and the urgency of regulations that prevent misuse and protect individuals, especially those from marginalized communities, from being unjustly targeted or accused by such systems.

Furthermore, the lack of transparency and accountability in the development and deployment of these technologies poses a significant challenge. As governments and private entities increasingly rely on AI for decision making, there is a pressing need for regulations that address these biases and protect individual rights.

The ACLU and other civil rights organizations emphasize the need for ethical oversight, transparency, and public accountability in the deployment of facial recognition to protect civil liberties and ensure equity. The Electronic Frontier Foundation (EFF) advocates for privacy and strict controls on facial recognition (Schwartz, 2021). Fight for the Future mobilizes against its use in public surveillance (Fight for the Future, 2024), and the Algorithmic Justice League (AJL) exposes biases within AI technologies (AJL, 2024).

Conclusion

The exploration of intersectionality in AI and the societal implications of ubiquitous surveillance reveals a complex web of challenges that must be navigated. The Gender Shades Project and the situation in the United Kingdom are just two examples of the broader implications of these technologies. As AI continues to evolve, it is imperative that developers, policymakers, and society remain vigilant about these issues, ensuring that technology serves as a tool for empowerment rather than a means of perpetuating existing disparities or encroaching on freedoms.

Discussion Questions

  1. Intersectionality in AI: How does the concept of intersectionality help us understand the biases present in facial recognition technology? How might biases in AI technology disproportionately affect marginalized communities? Discuss with examples from the Gender Shades Project.
  2. Societal Implications of Surveillance: What are the potential social consequences of widespread surveillance, especially with the use of technologies like facial recognition? Consider both positive and negative outcomes. How should developers and policymakers address these consequences?
  3. Role of Legislation and Regulation: What types of laws and regulations could be effective in governing the use of AI and surveillance technologies? Discuss the balance between innovation and privacy rights.
  4. Technological Solutions to Bias: What are some technological innovations or solutions that could help reduce bias in AI systems? Evaluate their potential effectiveness and limitations.
  5. Personal Privacy vs. Public Security: How do we balance the need for personal privacy with the potential benefits of AI and surveillance for public security? Discuss this in the context of both individual rights and societal welfare.

References

ACLU of Michigan. (2023, August 6). After Third Wrongful Arrest, ACLU Slams Detroit Police Department for Continuing to Use Faulty Facial Recognition Technology. American Civil Liberties Union. https://www.aclu.org/press-releases/

Algorithmic Justice League. (2024). Home page. https://www.ajlunited.org/

Bhuiyan, J. (2023, April 27). First man wrongfully arrested because of facial recognition testifies as California weighs new bills. The Guardian. https://www.theguardian.com/us-news/2023/apr/27/california-police-facial-recognition-software

Big Brother Watch. (2021). Facial Recognition in the UK. https://bigbrotherwatch.org.uk/campaigns/stop-facial-recognition/

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1-15.

Crenshaw, K. (1989). Demarginalizing the Intersection of Race and Sex: A Black Feminist Critique of Antidiscrimination Doctrine, Feminist Theory and Antiracist Politics. University of Chicago Legal Forum, 1989(1), 139-167.

Fight for the Future. (2024). Who are we? https://www.fightforthefuture.org/

Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. New York: Pantheon Books.

Levin, S. (2016, October 18). Half of US adults are recorded in police facial recognition databases, study says. The Guardian. https://www.theguardian.com/world/2016/oct/18/police-facial-recognition-database-surveillance-profiling

Schwartz, A. (2021, October 26). Resisting the Menace of Face Recognition. Electronic Frontier Foundation. https://www.eff.org/

About the Author

David Woodring, PhD, is a criminologist and medical sociologist who currently serves as an adjunct instructor for Southern New Hampshire University, Eastern Gateway Community College, and Northwest Arkansas Community College, guiding students across a variety of subjects, from cultural awareness in online learning to introductory sociology and social problems. Dr. Woodring also serves as a consultant for NewLearningSociology.com, where he teaches instructors and educators how to enhance teaching, learning, and instructional design with new digital technologies, including artificial intelligence. His most recent book, Beating ChatGPT with ChatGPT, teaches instructors how to use generative AI to design assignments that students can’t simply paste into ChatGPT for answers. To learn more about the book, visit: bit.ly/3OVVtNB
