Extraction attack risk for AI

Risks associated with input | Inference | Robustness | Amplified

Description

An extraction attack attempts to copy or steal an AI model by systematically sampling its input space, observing the corresponding outputs, and using those input-output pairs to train a surrogate model that replicates the original's behavior.
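The query-then-imitate loop described above can be sketched in a few lines. This is a minimal illustration, not a real attack: the victim here is a toy linear classifier the attacker treats as an opaque query API, and the surrogate is a simple perceptron trained on the stolen labels. All names and parameters are illustrative assumptions.

```python
import random

# Victim: an opaque model the attacker can only query (illustrative).
# Internally it is a linear classifier, but the attacker does not know that.
_W = [0.8, -0.5, 0.3]
def victim_predict(x):
    return 1 if sum(w * xi for w, xi in zip(_W, x)) > 0 else 0

# Step 1: sample the input space and record the victim's outputs.
random.seed(0)
queries = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2000)]
labels = [victim_predict(q) for q in queries]

# Step 2: train a surrogate (a perceptron) on the observed input-output pairs.
w = [0.0, 0.0, 0.0]
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        if pred != y:  # perceptron update on mistakes only
            for i in range(3):
                w[i] += (y - pred) * x[i]

# Step 3: measure how closely the surrogate mimics the victim on fresh inputs.
test = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(500)]
agree = sum(
    victim_predict(x) == (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0)
    for x in test
)
print(f"surrogate agrees with victim on {agree / len(test):.0%} of test inputs")
```

Real extraction attacks follow the same pattern against far more complex models: the attacker never sees the weights, only the prediction API, yet ends up with a model whose behavior closely tracks the original.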

Why is extraction attack a concern for foundation models?

A successful extraction attack produces a surrogate that mimics the original model, allowing the attacker to repurpose it for their own benefit, for example by eroding the model owner's competitive advantage or causing reputational harm.

Parent topic: AI risk atlas
