Incomplete advice risk for AI
Last updated: Dec 12, 2024
Risks associated with output
Value alignment
New to generative AI

Description

When a model provides advice without having enough information, harm can result if the advice is followed.

Why is incomplete advice a concern for foundation models?

A person might act on incomplete advice, or worry about a situation that does not apply to them, because the generated content is overgeneralized. For example, a model might provide incorrect medical, financial, or legal advice or recommendations that the end user acts on, resulting in harmful outcomes.
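One common way to reduce this risk is to add a lightweight output check in front of the model. The Python sketch below is a minimal, hypothetical illustration: it scans a model response for medical, financial, or legal advice cues and appends a caution before the text reaches the user. The ADVICE_PATTERNS keywords, the CAUTION wording, and the attach_advice_caution helper are all assumptions made for this example, not part of any product API; a production guardrail would typically use a trained classifier rather than keyword matching.

```python
import re

# Hypothetical guardrail sketch: flag model output that resembles
# medical, financial, or legal advice and append a caution so that
# users are less likely to act on potentially incomplete guidance.
# The keyword patterns below are illustrative, not exhaustive.
ADVICE_PATTERNS = {
    "medical": re.compile(r"\b(diagnos\w*|dosage|prescri\w*|symptom\w*)\b", re.I),
    "financial": re.compile(r"\b(invest\w*|tax(es)?|loan\w*|portfolio)\b", re.I),
    "legal": re.compile(r"\b(lawsuit\w*|contract\w*|liabilit\w*|regulation\w*)\b", re.I),
}

CAUTION = (
    "Note: this response may be incomplete. Consult a qualified "
    "{domain} professional before acting on it."
)

def attach_advice_caution(model_output: str) -> str:
    """Return the output unchanged, or with a caution appended
    when it matches one of the advice domains above."""
    for domain, pattern in ADVICE_PATTERNS.items():
        if pattern.search(model_output):
            return f"{model_output}\n\n{CAUTION.format(domain=domain)}"
    return model_output

# Example: advice-like output gets a caution appended.
print(attach_advice_caution("You should invest your savings in bonds."))
```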

Example

Misleading advice

According to the source article, an AI chatbot that New York City created to help small business owners provided incorrect and harmful advice that misstated local policies and advised companies to violate the law. The chatbot falsely suggested that businesses can put trash in black garbage bags and are not required to compost, which contradicts two of the city's signature waste initiatives. When asked whether a restaurant could serve cheese that a rodent had nibbled on, it responded affirmatively.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work towards mitigations. These examples are highlighted for illustrative purposes only.
