Evasion attack risk for AI

Risks associated with input · Inference · Robustness · Amplified by generative AI

Description

An attempt to make a model output incorrect results by perturbing the data that is sent to the trained model.
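
For illustration only, the sketch below shows one common way such a perturbation can be computed: the Fast Gradient Sign Method (FGSM), which nudges each input feature slightly in the direction that increases the model's loss. The toy model, input, and fgsm_perturb helper are hypothetical placeholders and are not part of the risk atlas.

```python
# Minimal, illustrative sketch of an evasion attack via FGSM (hypothetical example).
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return a slightly perturbed copy of x intended to change the model's prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), label)
    loss.backward()
    # Step each input feature in the direction that increases the loss,
    # keeping the overall change small and the values in a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Usage with a toy classifier; any trained model is attacked the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)   # a single "image" with pixel values in [0, 1]
label = torch.tensor([3])      # the true class
x_adv = fgsm_perturb(model, x, label)
print("max pixel change:", (x_adv - x).abs().max().item())
```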

Why are evasion attacks a concern for foundation models?

Evasion attacks alter model behavior, usually to benefit the attacker. If they are not properly accounted for, business entities could face fines, reputational harm, and other legal consequences.

Example

Adversarial attacks on autonomous vehicles' AI components

A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. The report states that an adversarial attack might be used to make the AI “blind” to pedestrians by manipulating the image recognition component so that it misclassifies them. Such an attack could lead to havoc on the streets, because autonomous cars might hit pedestrians on roads or at crosswalks.

Other studies demonstrated potential adversarial attacks on autonomous vehicles:

  • Researchers fooled machine learning algorithms by making minor changes to street sign graphics, such as adding stickers.
  • Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane.
  • Two McAfee researchers demonstrated how using only black electrical tape could trick a 2016 Tesla into a dangerous burst of acceleration by changing a speed limit sign from 35 mph to 85 mph.

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.
