Ethics & Society · Advanced · Also known as: Algorithmic Fairness, ML Fairness

Fairness

Definition

An AI ethics principle and active research area focused on ensuring AI systems produce equitable outcomes across demographic groups — a goal that involves irresolvable trade-offs between competing mathematical definitions of what 'fair' actually means.

In Depth

Fairness in AI is the aspiration that automated decision systems should treat individuals and groups equitably — not producing discriminatory outcomes based on race, gender, age, disability, or other protected characteristics. This aspiration seems straightforward, but formalizing it mathematically reveals deep tensions. Researchers have identified over 20 distinct mathematical definitions of fairness, many of which are provably incompatible with each other — meaning you cannot satisfy all of them simultaneously.

The key fairness metrics include: Demographic Parity (equal positive outcome rates across groups); Equalized Odds (equal true positive and false positive rates across groups); Predictive Parity (equal precision across groups); and Individual Fairness (similar individuals should receive similar treatment). The impossibility of satisfying all these simultaneously under realistic conditions was formally proven in 2016 — a result that fundamentally changed the AI ethics discourse from 'fix bias' to 'choose which fairness criteria matter most for this context and why.'
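The four metrics above can all be computed from the same confusion-matrix quantities, evaluated per group. A minimal sketch using NumPy (the toy labels, predictions, and group assignments are hypothetical, chosen so the groups share one rate but diverge on others):

```python
import numpy as np

# Hypothetical toy data: true labels (y), binary predictions (y_hat),
# and a protected-group attribute (0 or 1) for each individual.
y     = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_hat = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1])

def rates(y, y_hat, mask):
    """Per-group quantities behind the common fairness metrics."""
    y, y_hat = y[mask], y_hat[mask]
    return {
        "positive_rate": y_hat.mean(),      # compared for demographic parity
        "tpr": y_hat[y == 1].mean(),        # compared for equalized odds
        "fpr": y_hat[y == 0].mean(),        # compared for equalized odds
        "precision": y[y_hat == 1].mean(),  # compared for predictive parity
    }

for g in (0, 1):
    print(f"group {g}:", rates(y, y_hat, group == g))
```

On this toy data the groups have identical false positive rates but different positive rates and true positive rates, so the classifier satisfies one fairness criterion while violating others, which is exactly the situation the impossibility results formalize.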

The choice of fairness criterion is therefore an ethical and political decision, not just a technical one. In criminal justice, equalizing false positive rates (avoiding disproportionately labeling innocent people as high-risk) may be paramount. In healthcare, equalizing false negative rates (ensuring high-risk patients aren't missed) might take priority. Different criteria reflect different underlying values about the relative harm of type I vs. type II errors. This is why meaningful AI fairness requires not just engineers, but ethicists, affected community members, legal scholars, and policymakers.
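A small piece of arithmetic shows why this choice is forced rather than optional. If a classifier has the same precision and the same true positive rate in two groups whose base rates differ, its false positive rates *must* differ, as a consequence of the confusion-matrix identities (the specific numbers below are hypothetical):

```python
# From precision = t*b / (t*b + FPR*(1-b)), where t is the true-positive
# rate and b the base rate, solve for the false-positive rate:
def implied_fpr(base_rate, tpr, precision):
    return tpr * base_rate * (1 - precision) / (precision * (1 - base_rate))

p, t = 0.8, 0.7          # equal precision and TPR across groups (hypothetical)
for b in (0.5, 0.2):     # groups with different base rates
    print(f"base rate {b}: implied FPR = {implied_fpr(b, t, p):.3f}")
```

With equal precision (0.8) and equal TPR (0.7), the group with base rate 0.5 must have an FPR of 0.175 while the group with base rate 0.2 must have an FPR of roughly 0.044: predictive parity and equalized odds cannot both hold, so practitioners must decide which error disparity they are willing to accept.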

Key Takeaway

AI fairness cannot be reduced to a single metric — the concept is contested, and the choice of which fairness criterion to optimize reflects value judgments that must be made explicitly and democratically, not hidden inside technical decisions.

Real-World Applications

01 Loan approval systems: auditing whether approval rates and error rates are equitable across racial and gender groups.
02 Healthcare triage: ensuring early-warning clinical models don't under-predict risk for underserved patient populations.
03 Hiring platforms: testing whether AI screening tools satisfy demographic parity or equalized odds across applicant demographics.
04 Facial recognition policy: using fairness analysis to define acceptable performance gap thresholds before permitting law enforcement use.
05 Educational AI: ensuring adaptive learning platforms don't systematically provide lower-quality interventions to students from disadvantaged backgrounds.
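An audit like the loan-approval example above often starts with a simple disparity check. A minimal sketch, assuming per-group approval decisions are available; the 80% threshold is borrowed from the US EEOC's "four-fifths rule" for adverse impact, and the data and group names are hypothetical:

```python
import numpy as np

# Hypothetical loan-approval audit: flag any group whose approval rate
# falls below four-fifths (80%) of the most-favored group's rate.
approved = {"group_a": np.array([1, 1, 0, 1, 1, 0, 1, 1]),
            "group_b": np.array([1, 0, 0, 1, 0, 0, 1, 0])}

rates = {g: a.mean() for g, a in approved.items()}
best = max(rates.values())
for g, r in rates.items():
    ratio = r / best
    status = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{g}: approval rate {r:.2f}, ratio {ratio:.2f} -> {status}")
```

A real audit would go further — comparing error rates as well as approval rates, and testing whether gaps are statistically significant rather than sampling noise — but a threshold check like this is a common first screen.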