An AI ethics principle and active research area focused on ensuring AI systems produce equitable outcomes across demographic groups — a goal that involves irresolvable trade-offs between competing mathematical definitions of what 'fair' actually means.
In Depth
Fairness in AI is the aspiration that automated decision systems treat individuals and groups equitably, without producing discriminatory outcomes based on race, gender, age, disability, or other protected characteristics. The aspiration sounds straightforward, but formalizing it mathematically reveals deep tensions. Researchers have catalogued over 20 distinct mathematical definitions of fairness, many of which are provably incompatible with each other — meaning no system can satisfy all of them simultaneously.
The key fairness metrics include: Demographic Parity (equal positive-prediction rates across groups); Equalized Odds (equal true positive and false positive rates across groups); Predictive Parity (equal precision across groups); and Individual Fairness (similar individuals receive similar treatment). In 2016 it was formally proven that, whenever groups have different base rates and the classifier is imperfect, these criteria cannot all be satisfied at once — a result that fundamentally shifted the AI ethics discourse from 'fix bias' to 'choose which fairness criteria matter most for this context, and why.'
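The group metrics above reduce to simple rate comparisons on a confusion matrix. The following sketch (not from the source; the labels, predictions, and groups are hypothetical) computes the quantities each criterion compares for two groups of a binary classifier:

```python
def rates(y_true, y_pred):
    """Return (positive rate, TPR, FPR, precision) for one group's predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos_rate = (tp + fp) / len(y_true)            # demographic parity compares this
    tpr = tp / (tp + fn) if tp + fn else 0.0      # equalized odds compares TPR and FPR
    fpr = fp / (fp + tn) if fp + tn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0  # predictive parity compares this
    return pos_rate, tpr, fpr, precision

# Hypothetical data: group A has a higher base rate of true positives than group B.
y_true_a = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred_a = [1, 1, 0, 0, 1, 0, 1, 0]
y_true_b = [1, 0, 0, 0, 0, 0, 1, 0]
y_pred_b = [1, 0, 1, 0, 0, 0, 0, 0]

for name, yt, yp in [("A", y_true_a, y_pred_a), ("B", y_true_b, y_pred_b)]:
    pos, tpr, fpr, prec = rates(yt, yp)
    print(f"group {name}: pos_rate={pos:.2f} tpr={tpr:.2f} fpr={fpr:.2f} precision={prec:.2f}")
```

With these toy numbers the groups disagree on every metric at once; in practice, equalizing any one of them (say, the positive rate) generally leaves the others unequal whenever base rates differ, which is the content of the impossibility result.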
The choice of fairness criterion is therefore an ethical and political decision, not just a technical one. In criminal justice, equalizing false positive rates (avoiding disproportionately labeling innocent people as high-risk) may be paramount. In healthcare, equalizing false negative rates (ensuring high-risk patients aren't missed) might take priority. Different criteria reflect different underlying values about the relative harm of type I vs. type II errors. This is why meaningful AI fairness requires not just engineers, but ethicists, affected community members, legal scholars, and policymakers.
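Once a criterion is chosen, it can be enforced by post-processing. A minimal sketch of one such approach, assuming the criminal-justice priority above (equal false positive rates) and using hypothetical scores and labels, picks a per-group score threshold so one group's FPR does not exceed the other's:

```python
def fpr_at(scores, labels, thresh):
    """False positive rate when predicting positive for score >= thresh."""
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    if not negatives:
        return 0.0
    return sum(1 for s in negatives if s >= thresh) / len(negatives)

def match_fpr(scores, labels, target_fpr):
    """Lowest candidate threshold whose FPR does not exceed target_fpr.

    FPR is non-increasing in the threshold, so scanning candidate
    thresholds from low to high and keeping the first feasible one works.
    """
    for t in sorted(set(scores)):
        if fpr_at(scores, labels, t) <= target_fpr:
            return t
    return 1.0  # no candidate threshold is feasible; predict nothing positive

# Hypothetical risk scores and ground-truth labels for two groups.
scores_a = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels_a = [1, 1, 0, 0, 1, 0]
scores_b = [0.95, 0.6, 0.5, 0.45, 0.3, 0.1]
labels_b = [1, 0, 1, 0, 0, 0]

thresh_a = 0.5  # assume group A's operating threshold is fixed by policy
target = fpr_at(scores_a, labels_a, thresh_a)
thresh_b = match_fpr(scores_b, labels_b, target)
print(f"group A FPR at {thresh_a}: {target:.2f}; group B threshold: {thresh_b}")
```

Note the trade-off this makes explicit: equalizing FPR typically moves the groups' precision and true positive rates apart, so the code is a policy instrument, not a neutral fix — exactly the value judgment the paragraph above describes.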
AI Fairness cannot be reduced to a single metric — the concept is contested, and the choice of which fairness criterion to optimize reflects value judgments that must be made explicitly and democratically, not hidden inside technical decisions.

