Trust calibration risk for AI

Risks associated with output
Value alignment


Trust calibration risk arises when a person places too little or too much trust in an AI model's guidance, resulting in poor decision making.

Why is trust calibration a concern for foundation models?

In tasks where humans make choices based on AI suggestions, the consequences of poor decision making grow with the importance of the decision. Bad decisions can harm users and can expose businesses to financial loss, reputational damage, and legal liability.
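The paragraphs above describe trust calibration qualitatively. One common quantitative proxy, offered here as an illustration rather than as part of the risk atlas, is expected calibration error (ECE): it measures how far a model's stated confidence drifts from its actual accuracy, which is one signal of whether the trust users place in its confidence scores is well founded. The function and sample data below are hypothetical.

```python
# Illustrative sketch: expected calibration error (ECE) as one signal of
# whether a model's confidence scores deserve the trust placed in them.
# All data here is toy data, not drawn from the risk atlas.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Average |accuracy - confidence| over equal-width confidence bins,
    weighted by the fraction of predictions falling in each bin."""
    n = len(confidences)
    ece = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        idx = [i for i, c in enumerate(confidences) if lo < c <= hi]
        if not idx:
            continue
        acc = sum(correct[i] for i in idx) / len(idx)   # observed accuracy in bin
        conf = sum(confidences[i] for i in idx) / len(idx)  # mean stated confidence
        ece += (len(idx) / n) * abs(acc - conf)
    return ece

# Well calibrated: the model says 0.9 and is right about 90% of the time.
print(expected_calibration_error([0.9] * 10, [1] * 9 + [0]))  # near 0

# Overconfident: the model says 0.9 but is right only half the time,
# a setting where users are likely to over-trust its guidance.
print(expected_calibration_error([0.9] * 10, [1] * 5 + [0] * 5))  # near 0.4
```

A high ECE on its own does not prove harm, but persistent overconfidence of this kind is one mechanism by which users can be led into the over-trust this risk describes.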

Parent topic: AI risk atlas
