Ethics & Society · Advanced · Also known as: Algorithmic Fairness, ML Fairness

Fairness

Definition

An AI ethics principle and active research area focused on ensuring AI systems produce equitable outcomes across demographic groups — a goal that involves irresolvable trade-offs between competing mathematical definitions of what 'fair' actually means.

In Depth

Fairness in AI is the aspiration that automated decision systems should treat individuals and groups equitably — not producing discriminatory outcomes based on race, gender, age, disability, or other protected characteristics. This aspiration seems straightforward, but formalizing it mathematically reveals deep tensions. Researchers have identified over 20 distinct mathematical definitions of fairness, many of which are provably incompatible with each other — meaning you cannot satisfy all of them simultaneously.

The key fairness metrics include: Demographic Parity (equal positive outcome rates across groups); Equalized Odds (equal true positive and false positive rates across groups); Predictive Parity (equal precision across groups); and Individual Fairness (similar individuals should receive similar treatment). That these cannot all be satisfied simultaneously under realistic conditions was formally proven in 2016: whenever groups differ in the base rate of the outcome, calibration and equal group error rates cannot hold together except in degenerate cases (Kleinberg, Mullainathan & Raghavan, 2016). This result fundamentally changed the AI ethics discourse from 'fix bias' to 'choose which fairness criteria matter most for this context and why.'
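To make the metrics concrete, here is a minimal audit sketch that computes per-group positive rate (demographic parity), TPR/FPR (equalized odds), and precision (predictive parity) from binary predictions. The function name, group labels, and data are illustrative, not from any particular library:

```python
from collections import defaultdict

def fairness_report(y_true, y_pred, group):
    """Per-group positive rate (demographic parity), TPR/FPR (equalized
    odds), and precision (predictive parity) from binary arrays."""
    stats = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for y, yhat, g in zip(y_true, y_pred, group):
        key = ("tp" if y else "fp") if yhat else ("fn" if y else "tn")
        stats[g][key] += 1
    for g, s in sorted(stats.items()):
        n = sum(s.values())
        pos_rate = (s["tp"] + s["fp"]) / n  # demographic parity compares these
        tpr = s["tp"] / (s["tp"] + s["fn"]) if s["tp"] + s["fn"] else 0.0
        fpr = s["fp"] / (s["fp"] + s["tn"]) if s["fp"] + s["tn"] else 0.0
        ppv = s["tp"] / (s["tp"] + s["fp"]) if s["tp"] + s["fp"] else 0.0
        print(f"{g}: pos_rate={pos_rate:.2f} TPR={tpr:.2f} FPR={fpr:.2f} PPV={ppv:.2f}")

# Hypothetical audit data: true labels, model decisions, group membership.
fairness_report(
    y_true=[1, 0, 1, 1, 0, 0, 1, 0],
    y_pred=[1, 0, 1, 0, 1, 0, 1, 1],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

An audit then compares the printed rates across groups: a gap in pos_rate violates demographic parity, a gap in TPR or FPR violates equalized odds, and a gap in PPV violates predictive parity.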

The choice of fairness criterion is therefore an ethical and political decision, not just a technical one. In criminal justice, equalizing false positive rates (avoiding disproportionately labeling innocent people as high-risk) may be paramount. In healthcare, equalizing false negative rates (ensuring high-risk patients aren't missed) might take priority. Different criteria reflect different underlying values about the relative harm of type I vs. type II errors. This is why meaningful AI fairness requires not just engineers, but ethicists, affected community members, legal scholars, and policymakers.
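If equal false positive rates are the chosen criterion, one common remedy is post-processing: setting a separate decision threshold per group so each group's FPR stays at or below a shared target, in the spirit of Hardt et al. (2016). A minimal sketch under that assumption; the 5% target, scores, and labels are hypothetical:

```python
def thresholds_for_fpr(scores, labels, group, target_fpr=0.05):
    """Per-group score thresholds such that the false positive rate
    (fraction of negatives scored above the cut) is at most target_fpr."""
    thresholds = {}
    for g in set(group):
        negatives = sorted(
            s for s, y, gg in zip(scores, labels, group) if gg == g and y == 0
        )
        # Allow at most target_fpr of this group's negatives above the cut.
        k = int(len(negatives) * (1 - target_fpr))
        thresholds[g] = negatives[min(k, len(negatives) - 1)]
    return thresholds

cuts = thresholds_for_fpr(
    scores=[0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.6, 0.1],
    labels=[1, 0, 1, 0, 1, 0, 0, 0],
    group=["a", "a", "a", "a", "b", "b", "b", "b"],
)
decide = lambda score, g: score > cuts[g]  # group-specific cut equalizes FPR
```

Note that the same machinery could just as easily target false negative rates instead; the code does not answer the value question of which error matters more, it only enforces whatever answer was chosen.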

Key Takeaway

AI Fairness cannot be reduced to a single metric — the concept is contested, and the choice of which fairness criterion to optimize reflects value judgments that must be made explicitly and democratically, not hidden inside technical decisions.

Real-World Applications

01 Loan approval systems: auditing whether approval rates and error rates are equitable across racial and gender groups.
02 Healthcare triage: ensuring early-warning clinical models don't under-predict risk for underserved patient populations.
03 Hiring platforms: testing whether AI screening tools satisfy demographic parity or equalized odds across applicant demographics.
04 Facial recognition policy: using fairness analysis to define acceptable performance gap thresholds before permitting law enforcement use.
05 Educational AI: ensuring adaptive learning platforms don't systematically provide lower-quality interventions to students from disadvantaged backgrounds.

Frequently Asked Questions

What are the main fairness metrics in AI?

Key metrics include: demographic parity (equal positive prediction rates across groups), equalized odds (equal true positive and false positive rates), calibration (predictions mean the same thing regardless of group), individual fairness (similar individuals get similar outcomes), and counterfactual fairness (the outcome wouldn't change if only the protected attribute changed). These metrics often conflict; satisfying all of them simultaneously is mathematically impossible except in special cases such as equal base rates across groups.
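For readers who want the formal statements, the standard definitions are below. The notation is our addition: $\hat{Y}$ is the prediction, $Y$ the true outcome, $S$ the model score, $X$ the features, and $A$ the protected attribute with values $a, b$:

```latex
\begin{align*}
\text{Demographic parity:} \quad & P(\hat{Y}=1 \mid A=a) = P(\hat{Y}=1 \mid A=b) \\
\text{Equalized odds:} \quad & P(\hat{Y}=1 \mid Y=y, A=a) = P(\hat{Y}=1 \mid Y=y, A=b),
  \quad y \in \{0, 1\} \\
\text{Calibration:} \quad & P(Y=1 \mid S=s, A=a) = P(Y=1 \mid S=s, A=b) \\
\text{Counterfactual fairness:} \quad & P(\hat{Y}_{A \leftarrow a}=y \mid X=x, A=a)
  = P(\hat{Y}_{A \leftarrow a'}=y \mid X=x, A=a)
\end{align*}
```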

Can AI be both fair and accurate?

There are inherent tradeoffs. Enforcing strict fairness constraints can reduce overall accuracy, while maximizing accuracy can disproportionately harm minority groups. An impossibility result (Chouldechova, 2017) proves that calibration and equal group error rates cannot hold simultaneously whenever groups differ in base rates, except in special cases. The practical approach is to choose the fairness definition most appropriate for the specific context and accept a bounded accuracy cost.
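A short numeric sketch of why the result holds (the prevalences and rates below are illustrative, not from the original). Chouldechova's identity, FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR), ties a group's false positive rate to its prevalence p, so fixing PPV (predictive parity) and FNR across two groups with different prevalences forces their FPRs apart:

```python
def implied_fpr(prevalence, ppv, fnr):
    """False positive rate implied by Chouldechova's identity:
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Equal predictive parity (PPV) and equal miss rate (FNR) across groups,
# but different base rates of the outcome -- illustrative numbers only.
ppv, fnr = 0.7, 0.2
for name, p in [("group A", 0.3), ("group B", 0.5)]:
    print(name, round(implied_fpr(p, ppv, fnr), 3))
# group A 0.147, group B 0.343 -> equal FPR is impossible here
```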

Who decides what 'fair' means for an AI system?

This is fundamentally a social question, not a technical one. Different stakeholders — affected communities, policymakers, developers, ethicists — may have conflicting notions of fairness. Best practice involves participatory design with affected populations, regulatory guidance, ethical review boards, and transparent documentation of which fairness definition was chosen and why. Technical tools implement fairness; they don't define it.