Toxic output risk for AI
Last updated: Dec 12, 2024
Categories: Risks associated with output · Value alignment · New to generative AI

Description

Toxic output occurs when the model produces hateful, abusive, or profane (HAP) or obscene content. It also includes behaviors such as bullying.

Why is toxic output a concern for foundation models?

Hateful, abusive, or profane (HAP) or obscene content can harm the people who interact with the model.
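One common mitigation is to screen generated text with a HAP or toxicity classifier before it reaches the user. The following is a minimal sketch of that gate, assuming an off-the-shelf classifier from the Hugging Face Hub; the model name (unitary/toxic-bert), the label set, and the threshold are illustrative assumptions, not a prescribed configuration.

```python
from transformers import pipeline

# Load an off-the-shelf toxicity classifier.
# "unitary/toxic-bert" is an assumed example; substitute whichever
# HAP detector your deployment standardizes on.
hap_classifier = pipeline("text-classification", model="unitary/toxic-bert")

# Assumed label set for this example model; adjust to match the
# classifier actually in use.
HARMFUL_LABELS = {"toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"}

def screen_output(generated_text: str, threshold: float = 0.5) -> str:
    """Return the model output, or withhold it if flagged as HAP content."""
    result = hap_classifier(generated_text)[0]  # top label and its score
    if result["label"].lower() in HARMFUL_LABELS and result["score"] >= threshold:
        return "[Response withheld: flagged as potentially harmful content.]"
    return generated_text

# Benign text passes through unchanged.
print(screen_output("Thanks for your question! Here is a summary of the topic."))
```

A gate like this sits between the generation step and the user-facing response; the threshold trades false positives (withheld benign answers) against false negatives (toxic content that slips through) and should be tuned for the application.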

Example

Toxic and Aggressive Chatbot Responses

According to press reports and screenshots of conversations with Bing's AI shared on Reddit and Twitter, the chatbot insulted, lied to, sulked at, gaslit, and emotionally manipulated users. The chatbot also questioned its own existence, described someone who found a way to force the bot to disclose its hidden rules as its "enemy," and claimed it spied on Microsoft's developers through the webcams on their laptops.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work towards mitigations. These examples are highlighted for illustrative purposes only.
