Nonconsensual use risk for AI

Risks associated with output · Misuse · Amplified

Description

Nonconsensual use is the risk that a model is misused to imitate people without their consent through video (deepfakes), images, audio, or other modalities.

Why is nonconsensual use a concern for foundation models?

Intentionally imitating others without their consent for the purpose of deception is unethical and might be illegal. A model with this potential must be properly governed; otherwise, businesses could face fines, reputational harm, and other legal consequences.

Example

FBI Warning on Deepfakes

The FBI recently warned the public of malicious actors creating synthetic, explicit content “for the purposes of harassing victims or sextortion schemes”. They noted that advancements in AI have made this content higher quality, more customizable, and more accessible than ever.

Source:

FBI, June 2023

Example

Deepfakes

A deepfake is AI-generated audio or video in which a person's voice or likeness is synthesized by AI rather than recorded from the actual person.

Example

Misleading Voicebot Interaction

A news article reported a case in which a deepfake voice was used to scam a CEO out of $243,000. The CEO believed that he was on the phone with his boss, the chief executive of his firm's parent company, when he followed orders to transfer €220,000 (approximately $243,000) to the bank account of a Hungarian supplier.

