Area under PR evaluation metric
Last updated: Feb 04, 2025
The area under PR metric measures how well your model balances correctly identifying positive predictions (precision) with finding all instances of the positive class (recall).
Metric details
Area under precision recall (PR) is a quality evaluation metric that measures the performance of binary classification machine learning models in watsonx.governance.
Scope
The area under PR metric evaluates machine learning models only.
- Types of AI assets: Machine learning models
- Machine learning problem type: Binary classification
Scores and values
The area under PR metric score indicates how well the model balances precision and recall. Higher scores indicate better model performance at both correctly identifying positive predictions and finding all positive instances.
- Range of values: 0.0-1.0
- Best possible score: 1.0
- Chart values: Last value in the timeframe
A score at or below the proportion of positive instances in the data suggests performance no better than random guessing, while a score of 1.0 represents perfect classification.
Settings
Default threshold: Lower limit = 80%
Evaluation process
Area under PR is calculated by plotting precision against recall at different threshold values. For each threshold, a confusion matrix is generated that counts the true positives, false positives, and false negatives.
Precision and recall are calculated from these counts and plotted to create the PR curve. The area under this curve is then calculated to generate the metric score.
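As an illustration only, the following sketch reproduces the same steps with scikit-learn: sweep thresholds, compute precision and recall pairs, and integrate the curve. The labels, scores, and library choice are assumptions for the example, not the watsonx.governance implementation.

```python
# Illustrative sketch of the area under PR calculation with scikit-learn.
from sklearn.metrics import precision_recall_curve, auc

# Hypothetical ground-truth labels and model scores for the positive class.
y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.9, 0.3, 0.65, 0.05]

# Precision and recall at each score threshold.
precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

# Area under the PR curve, integrated over recall.
area_under_pr = auc(recall, precision)
print(f"Area under PR: {area_under_pr:.3f}")
```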
Do the math
Area under PR calculates the total area under the precision-recall curve as a weighted sum of the precision values at each threshold, where each weight is the increase in recall from the previous threshold:

AUCPR = Σn (Rn − Rn−1) × Pn
Precision (P) is calculated as the number of true positives (Tp) over the number of true positives plus the number of false positives (Fp) with the following formula:

P = Tp / (Tp + Fp)
Recall (R) is calculated as the number of true positives (Tp) over the number of true positives plus the number of false negatives (Fn) with the following formula:

R = Tp / (Tp + Fn)
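As a quick worked example, the sketch below applies these formulas to assumed confusion-matrix counts at a few thresholds. All counts are hypothetical and chosen only to show the arithmetic.

```python
# Worked example of the precision, recall, and area under PR formulas.
# Each tuple holds hypothetical (Tp, Fp, Fn) counts at one threshold,
# ordered from the strictest threshold to the loosest.
threshold_counts = [(40, 5, 60), (70, 15, 30), (90, 40, 10), (100, 100, 0)]

points = []
for tp, fp, fn in threshold_counts:
    precision = tp / (tp + fp)   # P = Tp / (Tp + Fp)
    recall = tp / (tp + fn)      # R = Tp / (Tp + Fn)
    points.append((recall, precision))

# Discrete area under the PR curve: sum of (Rn - Rn-1) * Pn, starting from recall 0.
area, prev_recall = 0.0, 0.0
for recall, precision in points:
    area += (recall - prev_recall) * precision
    prev_recall = recall

print(f"Area under PR: {area:.3f}")
```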
Parent topic: Evaluation metrics