Fairness
Field attempting to correct algorithmic bias. Involves complex trade-offs, as there's no single mathematical definition of what is 'fair'.
Key Concepts
Algorithmic Bias
Systematic and repeatable errors in a computer system that create unfair outcomes, such as privileging one arbitrary group of users over others.
Disparate Treatment
When a model treats people differently based explicitly on a sensitive attribute, such as using race or gender as a direct input. For example, a loan application model that applies a stricter approval threshold to applicants from a certain demographic group.
Disparate Impact
When a facially neutral model, one that does not use the sensitive attribute directly, still produces outcomes that differ across subgroups. For example, a hiring model that never sees gender but recommends candidates from one demographic group at a much higher rate than another. (A simple quantitative check for disparate impact and statistical parity is sketched at the end of this section.)
Statistical Parity
A fairness metric that is satisfied if the model's predictions are independent of the sensitive attribute. For example, a loan application model that has the same approval rate for all demographic groups.
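In notation, statistical parity requires P(Ŷ = 1 | A = a) to be the same for every value of the sensitive attribute A, and disparate impact is often summarized as the ratio of favorable-outcome rates between groups, with ratios below 0.8 commonly flagged under the "four-fifths rule" heuristic. The following is a minimal sketch of both checks in Python; the predictions, group labels, and threshold are illustrative assumptions, not values from this article.

```python
import numpy as np

# Entirely synthetic predictions (1 = approve) and group labels, for illustration only.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = y_pred[group == "A"].mean()  # approval rate for group A
rate_b = y_pred[group == "B"].mean()  # approval rate for group B

# Statistical parity difference: zero means both groups are approved at the same rate.
parity_gap = rate_a - rate_b

# Disparate impact ratio: the "four-fifths rule" heuristic flags values below 0.8.
impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rates: A={rate_a:.2f}, B={rate_b:.2f}")
print(f"statistical parity difference: {parity_gap:.2f}")
print(f"disparate impact ratio: {impact_ratio:.2f}")
```

On this toy data the approval rates are 0.60 and 0.40, so statistical parity is violated and the impact ratio of roughly 0.67 falls below the 0.8 threshold.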
Detailed Explanation
Fairness in AI is the field dedicated to ensuring that machine learning models do not perpetuate or amplify existing societal biases. It involves a complex set of trade-offs, as there is no single mathematical definition of what is 'fair'. The goal is to develop models that are not only accurate but also equitable in their outcomes across different demographic groups.
The Challenge of Defining Fairness
There are more than 20 different mathematical definitions of fairness, each with its own assumptions and trade-offs, and many of them are mutually incompatible: a model that is fair according to one definition may be unfair according to another. This makes choosing the right fairness metric for a given application challenging.
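To make the trade-off concrete, the sketch below (again with purely synthetic numbers) evaluates one set of predictions against two common criteria: statistical parity (equal approval rates across groups) and equal opportunity (equal true positive rates across groups). The data and group names are assumptions chosen so that the toy model satisfies the first criterion while violating the second.

```python
import numpy as np

# Synthetic ground truth, predictions, and group membership, for illustration only.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

def approval_rate(pred, mask):
    """Fraction of the group that receives the favorable prediction."""
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    """Fraction of truly qualified group members that receive the favorable prediction."""
    qualified = mask & (true == 1)
    return pred[qualified].mean()

a, b = group == "A", group == "B"

# Statistical parity difference: gap in approval rates between groups.
parity_gap = approval_rate(y_pred, a) - approval_rate(y_pred, b)

# Equal opportunity difference: gap in true positive rates between groups.
opportunity_gap = true_positive_rate(y_true, y_pred, a) - true_positive_rate(y_true, y_pred, b)

print(f"statistical parity difference: {parity_gap:.2f}")     # 0.00 -> parity holds
print(f"equal opportunity difference: {opportunity_gap:.2f}")  # -0.33 -> opportunity violated
```

Here the two groups are approved at identical rates, yet qualified members of group A are approved less often than qualified members of group B, so the model looks fair under one definition and unfair under the other.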
Sources of Bias
Bias in AI models can come from a variety of sources, including:
- Data Bias: When the data used to train a model is not representative of the real world. For example, a facial recognition model trained on a dataset of mostly light-skinned faces may not perform as well on dark-skinned faces.
- Algorithmic Bias: When the learning algorithm or objective itself introduces bias. For example, an algorithm designed to maximize overall accuracy may underperform on underrepresented groups and thereby perpetuate existing societal biases.
- Human Bias: When the people who design and build AI models have their own biases, which can be reflected in the models they create.
The Importance of Fairness
As AI models are increasingly used to make important decisions about people's lives, it is essential to ensure that they are fair and equitable. Unfair AI models can have a devastating impact on individuals and communities, from denying people loans and jobs to influencing bail and sentencing decisions.
Real-World Examples & Use Cases
Criminal Justice
AI models are used to predict the likelihood of a defendant re-offending, which can influence bail and sentencing decisions. However, these models have been shown to be biased against minority groups, leading to unfair outcomes.
Hiring
AI-powered hiring tools are used to screen resumes and predict which candidates are most likely to be successful. However, these tools can be biased against women and minorities, leading to discriminatory hiring practices.
Loan Applications
AI models are used to assess creditworthiness and make loan decisions. However, these models can be biased against people who live in certain neighborhoods or have certain demographic characteristics, leading to unfair lending practices.
Online Advertising
AI-powered advertising platforms are used to target ads to specific users. However, these platforms can be used to discriminate against certain groups of people, such as by showing them ads for lower-paying jobs or more expensive products.