Extraction attack risk for AI

Risks associated with input
Inference
Robustness
Amplified by generative AI

Description

An extraction attack attempts to copy or steal an AI model by systematically sampling its input space, observing the corresponding outputs, and using those input-output pairs to build a surrogate model that behaves similarly.
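
As a rough illustration, the sketch below builds a surrogate for a hypothetical black-box classifier using only its prediction API. The victim model, input ranges, query budget, and surrogate choice are illustrative assumptions, not details from this documentation.

```python
# Minimal sketch of a model-extraction (surrogate-building) attack.
# Assumes the attacker has only black-box query access to a victim classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Victim: a model the attacker can query but cannot inspect (hypothetical stand-in).
X_train, y_train = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def query_victim(x):
    """The attacker's only access: submit inputs and observe predicted labels."""
    return victim.predict(x)

# Attacker: sample the input space and record the victim's responses.
X_queries = rng.uniform(low=-4.0, high=4.0, size=(5000, 10))  # assumed feature range
y_queries = query_victim(X_queries)

# Train a surrogate on the collected (query, response) pairs.
surrogate = DecisionTreeClassifier(random_state=0).fit(X_queries, y_queries)

# Measure how closely the surrogate mimics the victim on fresh queries.
X_test = rng.uniform(low=-4.0, high=4.0, size=(1000, 10))
agreement = np.mean(surrogate.predict(X_test) == query_victim(X_test))
print(f"Surrogate agrees with the victim on {agreement:.1%} of test queries")
```

The higher the agreement rate, the more of the victim's behavior the attacker has effectively copied without ever seeing its parameters or training data.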

Why is an extraction attack a concern for foundation models?

With a successful attack, the attacker can gain valuable information, such as sensitive personal information or the intellectual property embedded in the model.

Parent topic: AI risk atlas