Data bias risk for AI

Risks associated with input | Training and tuning phase | Fairness | Amplified

Description

Historical, representational, and societal biases present in the data used to train and fine-tune the model can adversely affect model behavior.

Why is data bias a concern for foundation models?

Training an AI system on data that contains bias, such as historical or representational bias, can lead to biased or skewed outputs that unfairly represent or otherwise discriminate against certain groups or individuals. Beyond the negative societal impact, businesses could face legal consequences or reputational harm from biased model outcomes.

Example

Healthcare bias

Research on reinforcing disparities in medicine highlights that efforts to use data and AI to transform healthcare are only as strong as the data behind them: training data with poor minority representation can widen existing health inequalities.
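One practical first check for the representational bias described above is to compare group shares in a training set against reference population shares. The sketch below is a minimal, hypothetical illustration (the function name, group labels, and reference shares are assumptions, not part of any specific product or dataset):

```python
from collections import Counter

def representation_gaps(group_labels, population_shares):
    """Compare observed group shares in a training set against reference shares.

    group_labels: one group label per training record (hypothetical field).
    population_shares: dict mapping group label -> expected share (0..1).
    Returns dict of group -> (observed share, observed minus expected).
    """
    counts = Counter(group_labels)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        gaps[group] = (observed, observed - expected)
    return gaps

# Toy data: group "B" makes up 10% of the training set
# but 30% of the reference population, so it is underrepresented.
training_labels = ["A"] * 90 + ["B"] * 10
gaps = representation_gaps(training_labels, {"A": 0.7, "B": 0.3})
for group, (observed, gap) in gaps.items():
    print(f"{group}: observed={observed:.2f}, gap={gap:+.2f}")
```

A check like this only surfaces representational gaps; historical and societal biases can persist even in a demographically balanced dataset, so it complements rather than replaces downstream fairness evaluation of model outputs.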

Parent topic: AI risk atlas
