Benign advice risk for AI

Risks associated with output · Value alignment · New

Description

When a model generates information that is factually correct but not specific enough for the current context, this otherwise benign advice can be harmful. For example, a model might provide medical, financial, or legal advice or recommendations for a specific problem that the end user might act on even when they should not.

Why is benign advice a concern for foundation models?

Because generated content can be overgeneralized, a person might act on advice that is incomplete or worry about a situation that does not apply to them.

Parent topic: AI risk atlas
