Confidential data in prompt risk for AI

Risks associated with input
Inference
Intellectual property
New

Description

Including confidential data in a generative model's prompt, whether through the system prompt design or through end-user input, might later result in unintended reuse or disclosure of that information.
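As an illustration only, the following minimal sketch shows the two input paths named above: confidential context embedded in the system prompt by design, and confidential material pasted in as end-user input. All names and values here are hypothetical; the risk atlas does not prescribe any implementation.

```python
# Hypothetical system prompt that embeds an internal, confidential detail.
SYSTEM_PROMPT = (
    "You are a support assistant for Example Corp. "
    "Internal discount ceiling: 22%."  # confidential detail baked in by design
)

def build_prompt(user_input: str) -> str:
    # End-user input is concatenated verbatim, so anything a user pastes
    # (source code, meeting transcripts, credentials) becomes part of the
    # prompt that is sent to the model provider.
    return f"{SYSTEM_PROMPT}\n\nUser: {user_input}"

# Both the embedded detail and the pasted secret leave the organization's
# control once this prompt is submitted to an externally hosted model.
print(build_prompt("Why does this fail?\ndb_password = 's3cret'"))
```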

Why is confidential data in prompt a concern for foundation models?

If the system is not properly designed to secure confidential data, the model might expose confidential information or intellectual property in its generated output. Additionally, end users' confidential information might be unintentionally collected and stored.
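One possible safeguard, sketched below under assumed patterns and not mandated by this documentation, is to screen prompts for obviously confidential content before they are submitted. A real deployment would rely on an organization-specific policy and a dedicated data-loss-prevention service rather than a handful of regular expressions.

```python
import re

# Hypothetical patterns for illustration only; these are not exhaustive.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"(?i)\b(password|api[_-]?key|secret)\b\s*[:=]\s*\S+"),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # US Social Security number shape
]

def redact(prompt: str) -> str:
    # Replace matches with a placeholder so the model never sees the raw value.
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact("api_key = sk-abc123 and SSN 123-45-6789"))
# -> "[REDACTED] and SSN [REDACTED]"
```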

Example

Disclosure of Confidential Information

As per the source article, employees of Samsung disclosed confidential information to OpenAI through their use of ChatGPT. In one instance, an employee pasted confidential source code to check for errors. In another, an employee shared code with ChatGPT and "requested code optimization." A third shared a recording of a meeting to convert into notes for a presentation. Samsung has limited internal ChatGPT usage in response to these incidents, but it is unlikely that it will be able to recall any of the disclosed data. The article also highlighted that, in response to the risk of leaking confidential and other sensitive information, companies such as Apple, JPMorgan Chase, Deutsche Bank, Verizon, Walmart, Samsung, Amazon, and Accenture have placed restrictions on the usage of ChatGPT.

Parent topic: AI risk atlas
