Calibration error measures how far off your confidence estimates typically are. It is calculated by grouping your answers into confidence bins and taking the mean absolute difference between the stated confidence of each bin and your actual accuracy within that bin.

Example: If you answered 10 questions where you said "80% confident" and got 6 right (60%), your calibration error for that confidence level is 20 percentage points.
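The same calculation can be written out in a few lines of code. The sketch below is a minimal illustration, not MindFrame's implementation: the function name `calibration_error` is hypothetical, and it assumes each confidence bin is weighted equally, whereas MindFrame may weight bins by how many answers they contain.

```python
from collections import defaultdict

def calibration_error(answers):
    """Mean absolute gap between stated confidence and accuracy, per confidence bin.

    `answers` is a list of (confidence, correct) pairs, e.g. (0.8, True).
    Bins are keyed by the stated confidence level; each bin is weighted
    equally here (an assumption, not necessarily MindFrame's weighting).
    """
    bins = defaultdict(list)
    for confidence, correct in answers:
        bins[confidence].append(correct)

    gaps = []
    for confidence, outcomes in bins.items():
        accuracy = sum(outcomes) / len(outcomes)
        gaps.append(abs(confidence - accuracy))

    # Average gap across bins, expressed in percentage points.
    return 100 * sum(gaps) / len(gaps)

# Reproduces the example above: ten answers at 80% confidence, six correct.
answers = [(0.8, True)] * 6 + [(0.8, False)] * 4
print(calibration_error(answers))  # 20.0
```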

Lower calibration error is always better. A calibration error of 0 would mean your confidence perfectly predicts your accuracy, which is theoretically achievable but never seen in practice. Skilled forecasters maintain calibration errors below 8–10 percentage points over large sample sizes.

MindFrame shows your calibration error alongside a reliability diagram, which plots your stated confidence against actual accuracy, making it visually clear in which direction and how severely you are miscalibrated.
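A reliability diagram is straightforward to produce from per-bin numbers. The sketch below uses matplotlib with hypothetical per-bin data to show the idea; the function name, chart styling, and example values are illustrative assumptions, not MindFrame's output.

```python
import matplotlib.pyplot as plt

def reliability_diagram(bin_confidences, bin_accuracies):
    """Plot stated confidence against actual accuracy, one point per confidence bin.

    Points below the diagonal indicate overconfidence; points above it
    indicate underconfidence.
    """
    fig, ax = plt.subplots()
    ax.plot([0, 100], [0, 100], linestyle="--", label="Perfect calibration")
    ax.plot(bin_confidences, bin_accuracies, marker="o", label="Your results")
    ax.set_xlabel("Stated confidence (%)")
    ax.set_ylabel("Actual accuracy (%)")
    ax.legend()
    return fig

# Example: an overconfident pattern, where accuracy lags stated confidence.
fig = reliability_diagram([50, 60, 70, 80, 90], [48, 55, 61, 66, 74])
fig.savefig("reliability.png")
```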