Category: Ethics & Society · Level: Intermediate · Also known as: AI Bias, Machine Learning Bias, Data Bias

Algorithmic Bias

Definition

The tendency of AI systems to produce systematically unfair or discriminatory outcomes for certain groups — arising from biased training data, flawed model assumptions, or the contexts in which systems are deployed.

In Depth

Algorithmic Bias occurs when an AI system produces systematically different, and often unfair, outcomes for different demographic groups. It can arise at multiple points in the ML pipeline:

- Historical bias: training data reflects past discriminatory practices.
- Representation bias: certain groups are underrepresented in training sets.
- Measurement bias: the target variable is defined or measured in a way that distorts the real outcome of interest.
- Aggregation bias: a single model trained on mixed populations fails specific subgroups.
- Deployment bias: a model is used in contexts different from those it was trained on.

Real-world examples illustrate the stakes. Amazon's automated resume screening tool penalized resumes containing the word 'women's' and downranked graduates of all-women's colleges, because it was trained on ten years of resumes submitted predominantly by men. Facial recognition systems have shown dramatically higher error rates for darker-skinned women than for lighter-skinned men, raising serious concerns about their use in law enforcement. The COMPAS recidivism algorithm was shown to incorrectly flag Black defendants as high-risk at nearly twice the rate of white defendants.

Detecting and mitigating algorithmic bias is both a technical and organizational challenge. Technical approaches include auditing model outputs across demographic subgroups, rebalancing training data, applying fairness constraints during training, and using post-processing calibration. But technical fixes alone are insufficient: bias often reflects structural inequities in the underlying data and institutions, which no algorithm can resolve by itself. Meaningful solutions also require diverse development teams, stakeholder engagement, ongoing post-deployment monitoring, and governance with clear accountability.
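The subgroup-auditing step described above can be sketched in a few lines of code. The sketch below compares selection rates and false positive rates across groups and computes a disparate impact ratio; the function name, group labels, and data are illustrative, not taken from any particular fairness library.

```python
def audit_by_group(y_true, y_pred, groups):
    """Per-group selection rate and false positive rate for binary predictions.

    y_true, y_pred: lists of 0/1 labels (1 = favorable outcome, e.g. 'hire').
    groups: list of group labels, one per example.
    """
    stats = {}
    for g in set(groups):
        idx = [i for i, gr in enumerate(groups) if gr == g]
        # Selection rate: fraction of this group predicted favorable.
        sel_rate = sum(y_pred[i] for i in idx) / len(idx)
        # False positive rate: favorable predictions among true negatives.
        neg = [i for i in idx if y_true[i] == 0]
        fpr = sum(y_pred[i] for i in neg) / len(neg) if neg else None
        stats[g] = {"selection_rate": sel_rate, "false_positive_rate": fpr}
    return stats

# Hypothetical audit data: eight screened candidates from two groups.
y_true = [1, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

report = audit_by_group(y_true, y_pred, groups)
# Disparate impact ratio: group B's selection rate relative to group A's.
di = report["B"]["selection_rate"] / report["A"]["selection_rate"]
# report["A"]["selection_rate"] -> 0.75, report["B"]["selection_rate"] -> 0.25
```

A common rule of thumb (the "four-fifths rule" from US employment law) treats a disparate impact ratio below 0.8 as a signal that warrants investigation; here the ratio of one third would clearly trigger a closer look.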

Key Takeaway

Algorithmic Bias is not a bug to be patched — it is often a mirror reflecting historical inequities in data. Addressing it requires both technical rigor and the humility to involve affected communities in AI design.

Real-World Applications

01 Hiring AI audits: examining whether automated screening tools systematically filter out qualified candidates from underrepresented groups.
02 Facial recognition fairness testing: benchmarking recognition accuracy across gender and skin tone to identify performance disparities.
03 Credit scoring review: auditing whether ML-based credit scoring models systematically disadvantage minority applicants.
04 Healthcare AI equity: evaluating whether clinical prediction models trained on non-representative data perform worse for certain patient populations.
05 Content recommendation audits: assessing whether recommendation algorithms systematically expose certain users to more harmful or polarizing content.
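The fairness benchmarking described in item 02 can be sketched as a per-subgroup accuracy comparison. The subgroup labels and scored records below are hypothetical, loosely modeled on intersectional benchmarks that cross gender with skin tone; the accuracy gap is the headline number such audits report.

```python
def accuracy_by_subgroup(records):
    """records: list of (subgroup_label, correct: bool) pairs.

    Returns per-subgroup accuracy and the max-min accuracy gap,
    the simplest summary of a performance disparity.
    """
    totals, correct = {}, {}
    for label, ok in records:
        totals[label] = totals.get(label, 0) + 1
        correct[label] = correct.get(label, 0) + (1 if ok else 0)
    acc = {g: correct[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap

# Hypothetical benchmark results: whether each test image was
# recognized correctly, tagged by intersectional subgroup.
records = [
    ("lighter-male", True), ("lighter-male", True),
    ("lighter-male", True), ("lighter-male", True),
    ("darker-female", True), ("darker-female", False),
    ("darker-female", False), ("darker-female", True),
]
acc, gap = accuracy_by_subgroup(records)
# acc["lighter-male"] -> 1.0, acc["darker-female"] -> 0.5, gap -> 0.5
```

In a real audit the records would come from a held-out test set with balanced subgroup coverage; an unbalanced test set would hide exactly the disparities the audit is meant to surface.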