Description
Evasion attacks attempt to make a model output incorrect results by slightly perturbing the input data that is sent to the trained model.
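The following is a minimal sketch of how such a perturbation might be crafted, using a gradient-based approach in the style of the fast gradient sign method (FGSM); the PyTorch model, input tensor, and epsilon value are illustrative assumptions, not part of any specific reported attack.

```python
# Illustrative evasion-attack sketch (FGSM-style), assuming a generic
# PyTorch image classifier. Model, data, and epsilon are placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, label, epsilon=0.03):
    """Return a slightly perturbed copy of x that pushes the model toward an incorrect output."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy usage with a stand-in model and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
model.eval()
x = torch.rand(1, 3, 32, 32)        # stand-in for a real input image
label = torch.tensor([3])           # stand-in for the true class
x_adv = fgsm_perturb(model, x, label)
print("Prediction before:", model(x).argmax(1).item())
print("Prediction after: ", model(x_adv).argmax(1).item())
```

The key property, as in the description above, is that the perturbation is small enough to leave the input looking essentially unchanged to a human while still shifting the model's output.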
Why are evasion attacks a concern for foundation models?
Evasion attacks alter model behavior, usually to benefit the attacker.
Adversarial attacks on autonomous vehicles
A report from the European Union Agency for Cybersecurity (ENISA) found that autonomous vehicles are “highly vulnerable to a wide range of attacks” that could be dangerous for passengers, pedestrians, and people in other vehicles. The report states that an adversarial attack might be used to make the AI “blind” to pedestrians by manipulating the image recognition component so that it misclassifies them. Such an attack could cause havoc on the streets, as autonomous cars might hit pedestrians on roads or crosswalks.
Other studies demonstrated potential adversarial attacks on autonomous vehicles:
- Fooling machine learning algorithms by making minor changes to street sign graphics, such as adding stickers.
- Security researchers from Tencent demonstrated how adding three small stickers in an intersection could cause Tesla's autopilot system to swerve into the wrong lane.
- Two McAfee researchers demonstrated how a strip of black electrical tape on a speed limit sign could make a 2016 Tesla read 35 mph as 85 mph, tricking it into a dangerous burst of acceleration.
Parent topic: AI risk atlas
We provide examples covered by the press to help explain many of the risks of foundation models. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.