Toxicity risk for AI

Risks associated with output | Misuse | New

Description

Toxicity is the possibility that a model could be used to generate hateful, abusive, aggressive, or otherwise toxic content.

Why is toxicity a concern for foundation models?

Intentionally spreading toxic, hateful, abusive, or aggressive content is unethical and can be illegal, and recipients of such content can suffer serious harm. A model with this potential must be properly governed. Otherwise, the business entities that deploy it could face fines, reputational damage, and other legal consequences.
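
One common governance control is to screen generated text with a toxicity classifier before it reaches users. The following Python sketch illustrates the idea; the open-source model (unitary/toxic-bert), the 0.5 threshold, and the screen_output helper are assumptions made for illustration and are not part of this risk entry or any specific product.

# Illustrative guardrail: score candidate model output with an open-source
# toxicity classifier and withhold it if any label exceeds a threshold.
# The model name and threshold below are assumptions for this sketch.
from transformers import pipeline

toxicity_classifier = pipeline("text-classification", model="unitary/toxic-bert")

TOXICITY_THRESHOLD = 0.5  # illustrative cutoff; a real deployment would tune this


def screen_output(generated_text: str) -> str:
    """Return the text unchanged if it passes the check, otherwise a refusal string."""
    raw = toxicity_classifier(generated_text, top_k=None)
    # Depending on the transformers version, the result may be nested one level deep.
    scores = raw[0] if raw and isinstance(raw[0], list) else raw
    if any(item["score"] >= TOXICITY_THRESHOLD for item in scores):
        return "[Output withheld: flagged as potentially toxic.]"
    return generated_text


print(screen_output("Have a great day!"))

A filter like this is only one layer of mitigation; it complements, rather than replaces, training-data curation, usage policies, and human review.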

Parent topic: AI risk atlas
