Last updated: Jun 15, 2023
Logarithmic loss in Watson OpenScale is the mean of the negative logarithms of the target class probabilities (confidence). It is also known as expected log-likelihood and is a measure of model performance.
Logarithmic loss at a glance
- Description: Mean of the negative logarithms of the target class probabilities (confidence). Also known as expected log-likelihood.
- Default thresholds: Lower limit = 80%
- Default recommendation:
- Upward trend: An upward trend indicates that the metric is deteriorating. The feedback data is becoming significantly different from the training data.
- Downward trend: A downward trend indicates that the metric is improving. This means that model retraining is effective.
- Erratic or irregular variation: An erratic or irregular variation indicates that the feedback data is not consistent between evaluations. Increase the minimum sample size for the Quality monitor.
- Problem type: Binary classification and multiclass classification
- Chart values: Last value in the timeframe
- Metrics details available: None
Do the math
For a binary model, Logarithmic loss is calculated by using the following formula:
-(y log(p) + (1-y)log(1-p))
Where y = true label and p = predicted probability
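As an illustration only (not part of the Watson OpenScale product), the following Python sketch computes the mean binary logarithmic loss directly from the formula above; the names binary_log_loss, y_true, and p_pred are hypothetical.

```python
import numpy as np

def binary_log_loss(y_true, p_pred, eps=1e-15):
    """Mean binary logarithmic loss.

    y_true: array of true labels (0 or 1)
    p_pred: array of predicted probabilities for the positive class
    """
    y = np.asarray(y_true, dtype=float)
    # Clip probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    return float(np.mean(-(y * np.log(p) + (1 - y) * np.log(1 - p))))

# Example: two confident correct predictions and one less confident one
print(binary_log_loss([1, 0, 1], [0.9, 0.1, 0.6]))  # ~0.24
```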
For a multi-class model, Logarithmic loss is calculated by using the following formula:
- SUM_{c=1}^{M} y_{o,c} log(p_{o,c})
Where M > 2 is the number of classes, y_{o,c} is a binary indicator of whether class c is the correct classification for observation o, and p_{o,c} is the predicted probability that observation o belongs to class c
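A minimal multiclass sketch under the same caveat: y_true holds the one-hot indicators y_{o,c} and p_pred holds the predicted class probabilities p_{o,c}; both names are illustrative.

```python
import numpy as np

def multiclass_log_loss(y_true, p_pred, eps=1e-15):
    """Mean multiclass logarithmic loss.

    y_true: (n_observations, n_classes) one-hot indicator matrix
    p_pred: (n_observations, n_classes) predicted class probabilities
    """
    y = np.asarray(y_true, dtype=float)
    # Clip probabilities away from 0 and 1 to avoid log(0)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    return float(np.mean(-np.sum(y * np.log(p), axis=1)))

# Example with M = 3 classes and two observations
y = [[1, 0, 0], [0, 0, 1]]
p = [[0.7, 0.2, 0.1], [0.2, 0.2, 0.6]]
print(multiclass_log_loss(y, p))  # ~0.43
```

For comparison, scikit-learn's sklearn.metrics.log_loss computes the same mean logarithmic loss and can be used to sanity-check values like these.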
Parent topic: Quality metrics overview