Attribute inference attack risk for AI

Risks associated with input
Inference
Privacy
Amplified by generative AI

Description

An attribute inference attack is used to determine whether sensitive attributes of the individuals whose data was used to train a model can be inferred. These attacks occur when an adversary already has some prior knowledge about a training record, such as its non-sensitive attributes, and uses that knowledge together with the model's outputs to infer the sensitive data.
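
The sketch below illustrates the basic idea under stated assumptions: it assumes a black-box classifier that exposes a scikit-learn-style predict_proba interface, an attacker who knows a record's non-sensitive features and its label (as an integer class index), and a small set of candidate values for the sensitive attribute. All names, fields, and the helper function are hypothetical and are not part of the product documentation.

```python
# Illustrative sketch of an attribute inference attack.
# Assumption: `model` is any black-box classifier exposing a
# scikit-learn-style predict_proba interface; the record fields,
# candidate values, and function name are hypothetical.
import numpy as np

def infer_sensitive_attribute(model, known_features, sensitive_index,
                              candidate_values, known_label):
    """Guess the sensitive attribute of a record believed to be in the training data.

    The attacker substitutes each candidate value for the sensitive
    attribute, queries the model, and keeps the candidate that yields
    the highest confidence for the record's known label.
    """
    best_value, best_confidence = None, -1.0
    for candidate in candidate_values:
        features = np.array(known_features, dtype=float)
        features[sensitive_index] = candidate          # hypothesize a sensitive value
        proba = model.predict_proba(features.reshape(1, -1))[0]
        confidence = proba[known_label]                # confidence in the known outcome
        if confidence > best_confidence:
            best_value, best_confidence = candidate, confidence
    return best_value, best_confidence
```

Because models tend to be more confident on examples they were trained on, the candidate value that produces the highest confidence for the known label is a strong guess for the record's true sensitive attribute. This is a minimal sketch of the technique, not an implementation from the product.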

Why are attribute inference attacks a concern for foundation models?

With a successful attack, the attacker can gain valuable information, such as sensitive personal data or intellectual property.

Parent topic: AI risk atlas
