Hallucination risk for AI

Risks associated with output: Value alignment (New)

Description

Hallucinations occur when a model generates factually inaccurate or untruthful content. Hallucinatory output is often presented in a plausible, confident manner, which makes it difficult for end users to detect.

Why is hallucination a concern for foundation models?

False output can mislead users and, when incorporated into downstream artifacts, spread misinformation further. This can harm both the owners and the users of the AI models, and business entities could face fines, reputational harm, and other legal consequences.

Example

Fake Legal Cases

According to the source article, a lawyer cited fake cases and quotes generated by ChatGPT in a legal brief filed in federal court. The lawyer had consulted ChatGPT to supplement legal research for an aviation injury claim and later asked the chatbot whether the cases it provided were fake. ChatGPT responded that the cases were real and "can be found on legal research databases such as Westlaw and LexisNexis."
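
A minimal, hypothetical sketch of the kind of verification step that could catch such output is shown below. It checks model-generated case citations against a trusted reference set. The flag_unverified_citations function and the KNOWN_CITATIONS set are illustrative stand-ins rather than part of any product; a real workflow would query a legal research database such as Westlaw or LexisNexis instead of a hard-coded list.

# Minimal, hypothetical sketch: flag model-generated case citations that cannot
# be verified against a trusted reference set. KNOWN_CITATIONS stands in for a
# real lookup against a legal research database.

KNOWN_CITATIONS = {
    "Smith v. Jones, 123 F.3d 456 (2d Cir. 1997)",       # illustrative entries only
    "Doe v. Acme Corp., 987 F.2d 654 (9th Cir. 1993)",
}

def flag_unverified_citations(citations):
    """Return the citations that do not appear in the trusted reference set."""
    return [c for c in citations if c not in KNOWN_CITATIONS]

if __name__ == "__main__":
    generated = [
        "Smith v. Jones, 123 F.3d 456 (2d Cir. 1997)",
        "Rivera v. Global Airways, 512 F.3d 101 (3d Cir. 2008)",  # fabricated-looking citation
    ]
    for citation in flag_unverified_citations(generated):
        print(f"UNVERIFIED: {citation} -- confirm in a primary source before filing")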

Parent topic: AI risk atlas
