Non-disclosure risk for AI

Risks associated with output
Misuse
New

Description

Non-disclosure is the risk of failing to reveal that content was generated by an AI model.

Why is non-disclosure a concern for foundation models?

Not disclosing that content is AI-authored reduces trust and is deceptive. Intentional deception can result in fines, reputational harm, and other legal consequences.

Example

Undisclosed AI Interaction

As per the source, an online emotional support chat service ran a study in which GPT-3 was used to write or augment responses to around 4,000 users without informing them. The co-founder faced intense public backlash over the potential for harm that AI-generated chats could cause to already vulnerable users. He claimed that the study was "exempt" from informed consent laws.

Parent topic: AI risk atlas
