Hallucination risk for AI
Last updated: Dec 12, 2024
Risks associated with output
Robustness
New to generative AI

Description

Hallucination is the generation of factually inaccurate or untruthful content with respect to the model's training data or input. This is also sometimes referred to as a lack of faithfulness or a lack of groundedness.
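
The following is a minimal, illustrative sketch (not part of the risk atlas) of one naive way to surface potentially ungrounded output: flag generated sentences that have little word overlap with the source context. The function name flag_ungrounded, the lexical-overlap heuristic, and the 0.5 threshold are assumptions made for illustration; practical hallucination detection typically relies on entailment models, retrieval checks, or citation verification.

# Illustrative sketch only: a naive lexical "groundedness" check.
# flag_ungrounded, the overlap heuristic, and the 0.5 threshold are
# assumptions for illustration, not an official detection method.
def flag_ungrounded(generated_text: str, source_context: str, threshold: float = 0.5) -> list[str]:
    """Return generated sentences whose content words are mostly absent from the source."""
    source_words = set(source_context.lower().split())
    flagged = []
    for sentence in generated_text.split("."):
        # Crude content-word filter: ignore very short words.
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if not words:
            continue
        support = sum(w in source_words for w in words) / len(words)
        if support < threshold:
            flagged.append(sentence.strip())
    return flagged

context = "The report was filed in federal court in 2023."
output = "The report was filed in federal court. It cites several precedent cases from 1985."
print(flag_ungrounded(output, context))  # flags the unsupported second sentence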

Why is hallucination a concern for foundation models?

Hallucinations are misleading. False outputs can mislead users and, when incorporated into downstream artifacts, further spread misinformation. They can harm both the owners and the users of AI models, and in some uses the consequences can be particularly severe.

Example

Fake Legal Cases

According to the source article, a lawyer cited fake cases and quotations that were generated by ChatGPT in a legal brief filed in federal court. The lawyers had consulted ChatGPT to supplement their legal research for an aviation injury claim. When the lawyer later asked ChatGPT whether the cases it provided were fake, the chatbot responded that they were real and "can be found on legal research databases such as Westlaw and LexisNexis." The lawyers did not verify the cases, and the court sanctioned them.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation model risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work towards mitigations. These examples are highlighted for illustrative purposes only.
