Harmful code generation risk for AI
Risks associated with output
Harmful code generation
New to generative AI

Description

Models might generate code that causes harm or unintentionally affects other systems.

Why is harmful code generation a concern for foundation models?

Executing harmful code might open vulnerabilities in IT systems. Businesses might face fines, reputational harm, disruption to operations, and other legal consequences.

Example

Generation of Less Secure Code

Researchers at Stanford University investigated the impact of code-generation tools on code quality and found that programmers who used AI assistants tended to include more bugs in their final code. These bugs can increase the code's security vulnerabilities, yet the programmers believed their code to be more secure.
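A common class of vulnerability in such findings is an injection flaw, where untrusted input is concatenated into a query string. The sketch below is a hypothetical illustration (not taken from the study) that contrasts an insecure string-built SQL query, of the kind an assistant might suggest, with the parameterized form that keeps user input as data:

```python
import sqlite3

def find_user_insecure(conn, username):
    # Vulnerable pattern: string interpolation builds the SQL text,
    # so a crafted username such as "x' OR '1'='1" changes the query logic.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn, username):
    # Safer pattern: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demonstration with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("alice",), ("bob",)])

payload = "x' OR '1'='1"
leaked = find_user_insecure(conn, payload)  # injection returns every row
safe = find_user_secure(conn, payload)      # no user matches the literal string
print(len(leaked), len(safe))               # prints: 2 0
```

Both functions look similar at a glance, which is part of the risk: insecure code generated by an assistant can pass a casual review while still being exploitable.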

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand potential risks and work toward mitigations. These examples are for illustrative purposes only.
