Data bias risk for AI

Risks associated with input
Training and tuning phase
Fairness
Amplified by generative AI

Description

Historical, representational, and societal biases present in the data used to train and fine-tune the model can adversely affect model behavior.

Why is data bias a concern for foundation models?

Training an AI system on data with bias, such as historical or representational bias, can lead to biased or skewed outputs that unfairly represent or otherwise discriminate against certain groups or individuals. In addition to negative societal impacts, business entities could face legal consequences or reputational harm from biased model outcomes.

Example

Healthcare Bias

According to the research article on reinforcing disparities in medicine, using data and AI applications to transform how people receive healthcare is only as strong as the data behind the effort. For example, using training data with poor minority representation, or data that reflects existing unequal care, can lead to increased health inequalities.
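
The representational gap described in this example can be checked before training. The following minimal sketch in Python with pandas shows one way to compare per-group row counts, dataset share, and positive-label rates in a training table. The "group" and "label" column names and the toy values are assumptions chosen for illustration, not part of the risk atlas or any particular product.

    # Minimal sketch: measure group representation in a training table.
    # The "group" and "label" column names and the toy values are
    # assumptions for illustration; real audits use domain-specific attributes.
    import pandas as pd

    def representation_report(df: pd.DataFrame, group_col: str, label_col: str) -> pd.DataFrame:
        """Per-group row count, share of the dataset, and positive-label rate."""
        summary = df.groupby(group_col)[label_col].agg(count="size", positive_rate="mean")
        summary["share"] = summary["count"] / len(df)
        return summary

    if __name__ == "__main__":
        # Toy data: group B is both under-represented and labeled positive less often.
        data = pd.DataFrame({
            "group": ["A"] * 80 + ["B"] * 20,
            "label": [1] * 40 + [0] * 40 + [1] * 5 + [0] * 15,
        })
        print(representation_report(data, "group", "label"))

In a report like this, a large gap in dataset share suggests representational bias, while a gap in positive-label rates can indicate historical bias encoded in the labels; either finding would prompt rebalancing the data or further review before training.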

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.
