Untraceable attribution risk for AI

Risks associated with output
Explainability
Amplified by generative AI

Description

The original entity from which training data comes might not be known, limiting the utility and success of source attribution techniques.

Why is untraceable attribution a concern for foundation models?

The inability to provide provenance for an explanation makes it difficult for users, model validators, and auditors to understand and trust the model.

Parent topic: AI risk atlas
