Decision bias risk for AI

Risks associated with output
Fairness

Description

Decision bias occurs when a model's decisions unfairly advantage one group over another. It can stem from bias in the training data or arise as an unintended consequence of how the model was trained.
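One way to make this concrete is to compare the rate of favorable decisions across groups. The sketch below is purely illustrative: the decisions, group labels, and the 0.8 threshold (the common "four-fifths rule" of thumb) are assumptions for demonstration, not part of this risk description.

```python
# Illustrative check for decision bias as a selection-rate gap.
# Decisions and group labels are invented for this example.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]                 # 1 = favorable outcome
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

def selection_rate(decisions, groups, group):
    """Fraction of members of `group` who received a favorable decision."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

rate_a = selection_rate(decisions, groups, "a")      # 0.75
rate_b = selection_rate(decisions, groups, "b")      # 0.25
disparate_impact = rate_b / rate_a                   # ~0.33, well below 0.8
```

A ratio far below 1.0 signals that one group is receiving favorable outcomes much less often, which is the kind of disparity that warrants investigation before deployment.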

Why is decision bias a concern for foundation models?

Bias can harm the people affected by the model's decisions. Business entities could face fines, reputational harm, and other legal consequences.

Example

Unfair health risk assignment for black patients

A study on racial bias in health algorithms estimated that racial bias reduces the number of black patients identified for extra care by more than half. The study found that bias occurred because the algorithm used health costs as a proxy for health needs. Less money is spent on black patients who have the same level of need, and the algorithm thus falsely concludes that black patients are healthier than equally sick white patients.
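The proxy failure in the study can be illustrated with a small synthetic simulation. All numbers below are invented assumptions for illustration: two groups have identical distributions of true health need, but less is spent on group "b" for the same level of need, so ranking by cost under-selects that group for extra care.

```python
import random

random.seed(0)

# Synthetic population: identical need distributions, unequal spending.
patients = []
for i in range(1000):
    need = random.gauss(50, 10)                  # true health need (same for both groups)
    group = "a" if i % 2 == 0 else "b"
    spend_factor = 1.0 if group == "a" else 0.7  # assumed: less spent on group b
    cost = need * spend_factor                   # observed cost (the proxy)
    patients.append((group, need, cost))

# Select the top 20% for extra care, by the proxy vs. by true need.
k = len(patients) // 5
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)[:k]
by_need = sorted(patients, key=lambda p: p[1], reverse=True)[:k]

share_b_by_cost = sum(p[0] == "b" for p in by_cost) / k
share_b_by_need = sum(p[0] == "b" for p in by_need) / k
# Ranking by cost sharply under-selects group b relative to ranking by true need,
# even though both groups are equally sick on average.
```

The simulation shows the mechanism rather than the study's actual data: when spending is systematically lower for one group, a cost-based proxy misreads lower spending as lower need.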

Parent topic: AI risk atlas
