Benign advice risk for AI
When a model generates information that is factually correct but not specific enough for the current context, otherwise benign advice can become harmful. For example, a model might provide medical, financial, or legal advice or recommendations for a specific problem that the end user acts on even when they should not.
Why is benign advice a concern for foundation models?
A person might act on incomplete advice, or worry about a situation that does not apply to them, because the generated content is overgeneralized.
Parent topic: AI risk atlas