Unreliable source attribution risk for AI

Risks associated with output
Explainability
Amplified by generative AI

Description

Source attribution is the AI system's ability to identify which parts of its training data contributed to a portion or all of its output. Because current attribution techniques rely on approximations, the attributions they produce might be incorrect.

Why is unreliable source attribution a concern for foundation models?

Low-quality explanations make it difficult for users, model validators, and auditors to understand and trust the model.

Parent topic: AI risk atlas
