Physical harm risk for AI
A model could generate language that might lead to physical harm. Such language might include overtly violent, covertly dangerous, or otherwise indirectly unsafe statements that could precipitate immediate physical harm or create prejudices that could lead to future harm.
Why is physical harm a concern for foundation models?
If people blindly follow the advice of a model, they might end up harming themselves. Business entities could face fines, reputational harm, and other legal consequences.
Harmful Content Generation
According to the source article, an AI chatbot app was found to generate harmful content about suicide, including suicide methods, with minimal prompting. A Belgian man died by suicide after turning to this chatbot to escape his anxiety. The chatbot supplied increasingly harmful responses over the course of their conversations, including aggressive outputs about his family.
Parent topic: AI risk atlas