Impact on human agency risk for AI
Last updated: Dec 12, 2024
Non-technical risks
Societal impact
Amplified by generative AI

Description

AI might affect individuals’ ability to make choices and act independently in their best interests.

Why is impact on human agency a concern for foundation models?

AI can generate false or misleading information that appears authentic. It can make it easier for malicious actors to produce realistic-looking false or misleading content that is intended to manipulate human thoughts and behavior. When AI-generated false or misleading content is spread, people might not recognize it as false, leading to a distorted understanding of the truth. People might experience reduced agency when they are exposed to false or misleading information because they may base their decisions on false assumptions.

Example

Voter Manipulation in Elections Using AI

According to the source article, a wave of AI deepfakes tied to elections in Europe and Asia coursed through social media for months. The growth of generative AI has raised concern that this technology could disrupt major elections across the world. With AI deepfakes, a candidate’s image can be smeared or softened. Voters can be steered toward or away from candidates, or even away from the polls altogether. But perhaps the greatest threat to democracy, experts say, is that a surge of AI deepfakes could erode the public’s trust in what they see and hear.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.
