Incorrect risk testing risk for AI
Last updated: Dec 12, 2024
Non-technical risks
Governance
Amplified by generative AI

Description

A metric chosen to measure or track a risk can be poorly selected: it might measure the risk incompletely or measure the wrong risk for the given context.

Why is incorrect risk testing a concern for foundation models?

If the metrics do not measure the risk as intended, the risk is misunderstood and the appropriate mitigations might not be applied. When the model’s output is consequential, this can result in societal, reputational, or financial harm.
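For example, consider evaluating a harmful-content filter. The following minimal sketch (the filter, data, and numbers are invented for illustration) shows how overall accuracy can look reassuring on an imbalanced test set while a metric tied to the actual risk, recall on harmful outputs, reveals total failure:

    # Hypothetical illustration: testing a harmful-content filter with the
    # wrong metric. Labels: 1 = harmful output, 0 = benign output.
    y_true = [0] * 95 + [1] * 5   # harmful outputs are rare (5%)
    y_pred = [0] * 100            # a "filter" that never flags anything

    # Overall accuracy: fraction of outputs labeled correctly.
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

    # Recall on the harmful class: fraction of harmful outputs caught.
    true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    recall = true_positives / sum(y_true)

    print(f"accuracy: {accuracy:.2f}")  # 0.95 -- looks safe
    print(f"recall:   {recall:.2f}")    # 0.00 -- every harmful output is missed

Here accuracy measures the wrong thing for the context: because harmful outputs are rare, a filter that catches none of them still scores 0.95. A metric tied to the harmful class, such as recall, measures the risk that the test is meant to track.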

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events are still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work towards mitigations. These examples are highlighted for illustrative purposes only.
