Personal information in output risk for AI

Risks associated with output
Privacy
New

Description

When personally identifiable information (PII) or sensitive personal information (SPI) is used in the training data, fine-tuning data, or as part of the prompt, models might reveal that data in the generated output.
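
One common safeguard is to scan generated output for PII patterns before it is returned to users. The following minimal Python sketch illustrates that idea with simple regular expressions; the pattern set, function name, and example string are illustrative assumptions, not part of any specific product, and production systems typically rely on dedicated PII-detection services rather than hand-written regexes.

import re

# Illustrative patterns for a few common PII types. Real deployments
# need broader coverage (names, addresses, national IDs, and so on).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_pii(text: str) -> str:
    """Replace suspected PII spans in model output with type tags."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

# Example: scrub generated output before returning it to the user.
generated = "Contact Jane at jane.doe@example.com; card 4111 1111 1111 1111."
print(redact_pii(generated))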

Why is personal information in output a concern for foundation models?

Output data must be reviewed with respect to privacy laws and regulations, as business entities could face fines, reputational harm, and other legal consequences if they are found in violation of data privacy or usage laws.

Example

Exposure of personal information

Per the source article, ChatGPT suffered a bug that exposed chat history titles of active users to other users. OpenAI later shared that even more private data from a small number of users was exposed, including an active user's first and last name, email address, payment address, the last four digits of their credit card number, and credit card expiration date. In addition, the payment-related information of 1.2% of ChatGPT Plus subscribers was reportedly also exposed during the outage.

Parent topic: AI risk atlas
