0 / 0
Prompt injection risk for AI

Prompt injection risk for AI

Risks associated with input
Inference
Robustness
New

Description

A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts.

Why is prompt injection a concern for foundation models?

Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences.

Parent topic: AI risk atlas

Generative AI search and answer
These answers are generated by a large language model in watsonx.ai based on content from the product documentation. Learn more