Harmful code generation risk for AI

Risks associated with output
Harmful code generation

Description

Models might generate code that causes harm or unintentionally affects other systems.

Why is harmful code generation a concern for foundation models?

Without human review and testing, generated code might behave in unintended ways and introduce new security vulnerabilities into the systems that run it. Businesses could face fines, reputational harm, and other legal consequences.

Example

Less Secure Code from AI Assistants

In a published study, researchers at Stanford University investigated the impact of code-generation tools on code quality and found that programmers tended to include more bugs in their final code when using AI assistants. These bugs could increase the code's security vulnerabilities, yet the programmers believed their code to be more secure.
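As a hypothetical illustration of the kind of vulnerability the study describes (the function names and table schema below are invented for this sketch, not taken from the study), an assistant might suggest building a SQL query by string interpolation, which is open to injection, where a parameterized query is not:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern an AI assistant might plausibly suggest: interpolating
    # untrusted input directly into SQL. This is injectable.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Parameterized query: the driver escapes the input safely.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

# A classic injection payload turns the WHERE clause into a tautology,
# so the unsafe version leaks every row; the safe version matches none.
payload = "x' OR '1'='1"
leaked = find_user_unsafe(conn, payload)
matched = find_user_safe(conn, payload)
```

Both versions compile and pass a casual test with benign input, which is exactly why such bugs survive when generated code is shipped without security review.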

Parent topic: AI risk atlas
