Data privacy rights alignment risk for AI
Last updated: Dec 12, 2024
Risks associated with input
Training and tuning phase
Privacy
Amplified by generative AI

Description

Existing data privacy laws might require organizations to provide data subject rights such as opt-out, the right to access, and the right to be forgotten.

Why is data privacy rights alignment a concern for foundation models?

Improper use of personal data, or a data subject's request for its removal, could force organizations to retrain the model, which is expensive.

Example

Right to Be Forgotten (RTBF)

Laws in multiple locales, including Europe's GDPR, grant data subjects the right to request that organizations delete their personal data (the 'Right to Be Forgotten', or RTBF). However, emerging and increasingly popular large language model (LLM)-enabled software systems present new challenges for this right. According to research by CSIRO's Data61, data subjects can identify usage of their personal information in an LLM “by either inspecting the original training data set or perhaps prompting the model.” However, the training data might not be public, or companies might not disclose it, citing safety and other concerns. Guardrails might also prevent users from surfacing the information by prompting. Due to these barriers, data subjects might not be able to initiate RTBF procedures, and companies that deploy LLMs might not be able to comply with RTBF laws.
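
To make the two discovery routes above concrete, here is a minimal, illustrative Python sketch of each: scanning a (hypothetically accessible) training corpus for a subject's identifiers, and probing a deployed model by prompting. The names training_records and query_model are hypothetical placeholders introduced for illustration, not part of any real product or API.

```python
# Illustrative sketch only: `training_records` (an iterable of text
# records) and `query_model` (a callable that returns generated text)
# are hypothetical placeholders, not any real dataset or API.

def inspect_training_data(training_records, subject_identifiers):
    """Return indices of training records that mention any of the
    subject's identifiers (for example, a name or email address).

    This route assumes the training set is accessible, which is often
    not the case for proprietary models."""
    return [
        i for i, record in enumerate(training_records)
        if any(ident.lower() in record.lower() for ident in subject_identifiers)
    ]

def probe_model(query_model, subject_identifiers):
    """Prompt the model and report which identifiers appear in its
    output. In practice, guardrails might block or filter such probes,
    and the absence of a match does not prove the data was not used."""
    prompt = f"What do you know about {subject_identifiers[0]}?"
    response = query_model(prompt)
    return [
        ident for ident in subject_identifiers
        if ident.lower() in response.lower()
    ]

# Example usage with toy stand-ins:
records = ["Jane Doe lives at 12 Oak St.", "Unrelated text."]
print(inspect_training_data(records, ["Jane Doe"]))  # -> [0]
```

Either check can fail for exactly the reasons described above: the corpus might be withheld, and guardrails might suppress the model's answer, leaving the data subject unable to establish whether an RTBF request even applies.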

Example

Lawsuit About LLM Unlearning

According to press reports, a lawsuit was filed against Google alleging the use of copyrighted material and personal information as training data for its AI systems, including its Bard chatbot. Opt-out and deletion rights are guaranteed to California residents under the CCPA and to children in the United States under the age of 13 under COPPA. The plaintiffs allege that there is no way for Bard to “unlearn” or fully remove all the scraped personal information (PI) it has been fed. The plaintiffs also note that Bard's privacy notice states that Bard conversations cannot be deleted by the user after they have been reviewed and annotated by the company, and that they might be kept for up to three years. The plaintiffs allege that these practices further contribute to noncompliance with these laws.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are for illustrative purposes only.
