Nonconsensual use risk for AI
Last updated: Dec 12, 2024
Risks associated with output
Misuse
Amplified by generative AI

Description

Generative AI models might be intentionally misused to imitate people without their consent by creating deepfakes in video, image, audio, or other modalities.

Why is nonconsensual use a concern for foundation models?

Deepfakes can spread disinformation about a person and damage that person’s reputation. A model with this potential must be properly governed.

Example

FBI Warning on Deepfakes

The FBI recently warned the public about malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever.

Sources:

FBI, June 2023

Example

Audio Deepfakes

According to the source article, the Federal Communications Commission outlawed robocalls that contain AI-generated voices. The announcement came after AI-generated robocalls mimicked the President's voice to discourage people from voting in New Hampshire's first-in-the-nation primary.

Parent topic: AI risk atlas

We provide examples that were covered by the press to help explain many of the risks of foundation models. Many of these events are still evolving or have since been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.
