Understanding AI Bias (and How to Address It) | January 2024

Artificial intelligence (AI) refers to computer systems, typically built on machine learning, that can perform tasks usually requiring human intelligence, such as decision-making, visual perception, and speech recognition. AI bias, which refers to systematic prejudices within algorithms that result in unfair outcomes or discrimination against certain groups, is an important business ethics issue. These biases can significantly impact decision-making processes and perpetuate societal inequalities.

Causes of AI Bias

A significant source of AI bias is biased data. Machine learning models learn from historical data and inherit any biases present within it. If the training data is skewed or unrepresentative, the model will reproduce those biases. For example, if a company trains an AI recruitment tool on historical data that favors male applicants, the algorithm will continue to favor male applicants. In one well-known case, Amazon scrapped a recruiting engine it had developed after discovering that it showed bias against women.
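
To make the mechanism concrete, here is a minimal, hypothetical Python sketch (not Amazon's actual system): a toy model "trained" on skewed historical hiring records learns different baseline rates for men and women and carries that skew into new decisions. All records and numbers are invented for illustration.

    from collections import defaultdict

    # Hypothetical historical hiring records: (gender, years_experience, hired).
    # The data is skewed: men were hired at a higher rate than comparably
    # experienced women.
    history = [
        ("M", 5, True), ("M", 3, True), ("M", 2, True), ("M", 1, False),
        ("F", 5, True), ("F", 3, False), ("F", 2, False), ("F", 1, False),
    ]

    # "Train" a naive model: estimate the historical hire rate per gender.
    totals, hires = defaultdict(int), defaultdict(int)
    for gender, _, hired in history:
        totals[gender] += 1
        hires[gender] += hired
    hire_rate = {g: hires[g] / totals[g] for g in totals}

    # The model now scores applicants partly on gender, reproducing the
    # historical skew instead of judging qualifications alone.
    def score(gender, years_experience):
        return hire_rate[gender] * years_experience

    print(score("M", 4))  # 3.0 -- favored by the inherited skew
    print(score("F", 4))  # 1.0 -- identical experience, lower score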

Algorithmic design choices also contribute to bias. Decisions humans make during development, such as feature selection or model architecture, can amplify biases present in the data and further influence outcomes.
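
As a hedged illustration of why feature selection alone cannot guarantee fairness, the sketch below assumes an invented scoring rule: even after the sensitive attribute (gender) is excluded by design, a correlated proxy feature can still carry the bias. The feature names and weights are hypothetical.

    # Hypothetical applicants. Gender has been excluded as a feature, but
    # "attended a women's college" acts as a proxy for it.
    applicants = [
        {"name": "A", "experience": 5, "womens_college": 0},
        {"name": "B", "experience": 5, "womens_college": 1},
    ]

    def naive_score(applicant):
        # An invented weight learned from biased historical data penalizes
        # the proxy feature, reintroducing the bias the designers excluded.
        return applicant["experience"] - 3 * applicant["womens_college"]

    for a in applicants:
        print(a["name"], naive_score(a))  # A: 5, B: 2 -- same experience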

Additionally, a lack of diversity in development teams can allow biases to go unnoticed. The technology sector has a major diversity problem, and diverse perspectives and experiences are crucial for identifying and addressing potential biases within AI systems. This is one of the many reasons diversity is important in business.

Another critical factor is the implicit bias of the developers themselves. AI models are built by people, after all, and developers' unconscious biases can seep into AI systems during their creation, affecting decision-making processes and outcomes.

Avoiding AI Bias

There are several strategies for avoiding AI bias, though the diversity of applications and algorithms means there is no single path that fits every case. Using diverse and representative datasets is fundamental: carefully curated datasets that account for various demographics can significantly reduce bias.

Continuous audits and testing of AI systems are essential. Regular monitoring helps detect biases, and rigorous testing against diverse scenarios allows potential biases to be identified and corrected before deployment. OpenAI, the company behind the chatbot ChatGPT, first trains its models on large datasets from the internet, then retrains them on a narrower dataset curated by human reviewers who follow set guidelines, and continues to fine-tune the models by keeping feedback channels open.
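
As one hedged example of what such an audit might look like in practice (not OpenAI's process), the sketch below compares a model's selection rates across demographic groups and flags disparities using the common "four-fifths" rule of thumb from employment analytics. The decisions shown are made up.

    # Audit a model's outputs: a list of (group, was_selected) decisions.
    def audit_selection_rates(decisions):
        totals, selected = {}, {}
        for group, was_selected in decisions:
            totals[group] = totals.get(group, 0) + 1
            selected[group] = selected.get(group, 0) + int(was_selected)
        rates = {g: selected[g] / totals[g] for g in totals}
        benchmark = max(rates.values())
        for group, rate in sorted(rates.items()):
            # Flag any group selected at under 80% of the top group's rate.
            flag = "FLAG" if rate < 0.8 * benchmark else "ok"
            print(f"{group}: selection rate {rate:.0%} [{flag}]")

    # Made-up model decisions for two groups:
    audit_selection_rates([
        ("Group A", True), ("Group A", True), ("Group A", False), ("Group A", True),
        ("Group B", True), ("Group B", False), ("Group B", False), ("Group B", False),
    ])
    # Prints: Group A: 75% [ok], Group B: 25% [FLAG]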

Additionally, transparency and accountability are vital. AI systems should be transparent about their decision-making, providing explanations that make it possible to understand why particular decisions were made. This facilitates bias identification and corrective action. Companies should also be held accountable for the AI models they develop.
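
Here is a minimal sketch of what decision-level transparency could look like, assuming a simple additive scoring model with invented weights: alongside the score, the system reports each input's contribution so a reviewer can check whether an inappropriate factor drove the outcome.

    # Invented weights for a simple, inspectable additive model.
    WEIGHTS = {"years_experience": 2.0, "certifications": 1.5}

    def score_with_explanation(applicant):
        # Report each feature's contribution alongside the total score.
        contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
        return sum(contributions.values()), contributions

    total, why = score_with_explanation({"years_experience": 4, "certifications": 2})
    print(f"score = {total}")             # score = 11.0
    for feature, amount in why.items():
        print(f"  {feature}: +{amount}")  # each factor's share of the decision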

Encouraging diversity within development teams is also pivotal. Diverse teams bring different perspectives, aiding in identifying and mitigating biases effectively.

Ethical Implications

The ethical implications of AI bias are of great concern. Biased AI systems can perpetuate discrimination and unfairness, affecting opportunities in areas such as employment, finance, and healthcare. A lack of accountability for biased AI decisions also makes corrective action difficult.

Moreover, these systems can inadvertently reinforce societal stereotypes, deepening social divides and hindering progress toward a more equitable society. Legal and regulatory concerns regarding the accountability and fairness of AI systems further underscore the urgency of addressing bias.

Tackling AI bias requires concerted efforts in data collection, algorithmic design, diversity in development, and adherence to ethical guidelines. Through these strategies, it's possible to mitigate bias and ensure AI systems promote fairness, inclusivity, and ethical decision-making.

In the Classroom

This article can be used to discuss business ethics (Chapter 2: Business Ethics and Social Responsibility).

Discussion Questions

  1. What is AI bias?
  2. What factors can contribute to AI bias?
  3. How can AI bias be reduced (or eliminated)?

This article was developed with the support of Kelsey Reddick for and under the direction of O.C. Ferrell, Linda Ferrell, and Geoff Hirt.


Sources

Cheyenne DeVon, "How to Reduce AI Bias, According to Tech Expert," CNBC, December 16, 2023, https://www.cnbc.com/2023/12/16/how-to-reduce-ai-bias-according-to-tech-expert.html

Michael Li, "To Build Less-Biased AI, Hire a More-Diverse Team," Harvard Business Review, October 26, 2020, https://hbr.org/2020/10/to-build-less-biased-ai-hire-a-more-diverse-team

Monika Mueller, "The Ethics Of AI: Navigating Bias, Manipulation and Beyond," Forbes, June 23, 2023, https://www.forbes.com/sites/forbestechcouncil/2023/06/23/the-ethics-of-ai-navigating-bias-manipulation-and-beyond/

About the Author

O.C. Ferrell is the James T. Pursell Sr. Eminent Scholar in Ethics and Director of the Center for Ethical Organizational Cultures in the Raymond J. Harbert College of Business, Auburn University. He was formerly Distinguished Professor of Leadership and Business Ethics at Belmont University and University Distinguished Professor at the University of New Mexico. He has also been on the faculties of the University of Wyoming, Colorado State University, University of Memphis, Texas A&M University, Illinois State University, and Southern Illinois University. He received his Ph.D. in marketing from Louisiana State University.
