Membership inference attack risk for AI

Risks associated with input
Inference
Privacy
Traditional AI risk

Description

Given a trained model and a data sample, an attacker carefully samples the model's input space and observes its outputs to deduce whether that sample was part of the model's training data. This is known as a membership inference attack.
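
The following is a minimal sketch of one common variant, a confidence-threshold attack, assuming a scikit-learn classifier as the target model and a synthetic dataset; the infer_membership function and the threshold value are illustrative, not a prescribed method. The idea is that overfitted models tend to be more confident on samples they were trained on, so unusually high confidence is treated as evidence of membership.

# Minimal sketch of a confidence-threshold membership inference attack.
# Assumptions: a scikit-learn classifier as the target model, a synthetic
# dataset, and an illustrative (untuned) confidence threshold.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a target model on "member" data; hold out "non-member" data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def infer_membership(model, samples, threshold=0.9):
    # Guess that a sample was in the training set when the model's
    # confidence in its predicted class exceeds the threshold.
    confidences = model.predict_proba(samples).max(axis=1)
    return confidences >= threshold

# Samples that the model is unusually confident about are flagged as members.
member_guesses = infer_membership(target_model, X_member)
nonmember_guesses = infer_membership(target_model, X_nonmember)
print(f"Flagged as members (true members):     {member_guesses.mean():.2f}")
print(f"Flagged as members (true non-members): {nonmember_guesses.mean():.2f}")

More elaborate attacks replace the fixed threshold with a learned membership classifier, often trained on shadow models that mimic the target, but the underlying signal is the same: the model behaves measurably differently on data it has seen.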

Why is a membership inference attack a concern for foundation models?

Identifying whether a data sample was used to train a model can reveal what data the model was trained on, possibly giving competitors insight into how the model was built and an opportunity to replicate or tamper with it.

Parent topic: AI risk atlas
