How it works
The feedback loop that changes how you think
Metacognition is only trained through precise, immediate feedback on the gap between what you believed and what turned out to be true. MindFrame is built around five feedback loops — one for each dimension of metacognitive skill.
The session flow
Every MindFrame session follows this exact loop. Each step is a deliberate metacognitive act.
Choose a training mode
Metacognitive planning
You decide where you want to train your thinking today. This is the first metacognitive act of every session: selecting the type of thinking you want to exercise. Choosing Bias Hunter signals intent to work on pattern recognition. Choosing Calibration Lab means you are focused on confidence accuracy. The choice is deliberate, and it shapes what your session measures.
Answer and rate your confidence
The metacognitive moment
Before seeing any result, you state how certain you are, on a scale from 50% (pure guess) to 99% (near certain). This is the most important step in the system. By committing your self-assessment before feedback can bias it, MindFrame captures a clean signal: what you actually believed at the moment you made your decision. This is the data point that most training tools never collect.
Confidence scale
AI scores your reasoning
Not just right or wrong
Getting the right answer is the floor, not the ceiling. An AI reads your written reasoning and scores the quality of your argument, identifying logical gaps, unsupported jumps, irrelevant evidence, and sound inference. Two people can reach the same answer with wildly different reasoning quality. MindFrame tracks both, because the quality of your reasoning determines whether your good results are reproducible.
See your calibration gap
The number most people never see
The distance between your confidence and your accuracy is your Calibration Error. If you said you were 85% confident and you are right 65% of the time on questions like that, your calibration error is 20 points. This is the number most people have never seen about themselves, and it is the most important number in MindFrame. Reducing it is the primary training objective.
Stated confidence: 85%
Actual accuracy: 65%
Calibration Error: 20 pts
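The arithmetic behind the calibration error is simple enough to sketch. This is an illustration of the concept only, not MindFrame's actual implementation; the function name is hypothetical.

```python
def calibration_error(stated_confidence: float, actual_accuracy: float) -> float:
    """Absolute gap, in percentage points, between how confident you said
    you were and how often you were actually right."""
    return abs(stated_confidence - actual_accuracy)

# The example above: 85% stated confidence, 65% actual accuracy.
print(calibration_error(85, 65))  # prints 20
```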
Session debrief and next move
Precision coaching, not generic tips
Claude analyzes your full session: every answer, every confidence rating, every reasoning trace. It identifies your weakest dimension across the five metacognitive axes, surfaces the pattern behind your errors, and tells you exactly what to work on in your next session. This is not a generic tips list; it is a specific diagnosis of what your thinking did in this session.
The scoring system
Three layers, not one
Every other training platform scores one thing: whether you got it right. MindFrame scores three — because the other two layers are where improvement actually happens.
Outcome Score
Did you get it right?
The baseline — were your answers correct? This matters, but it is the least interesting of the three layers. Many people get answers right for the wrong reasons and fail to reproduce that success under slightly different conditions.
Reasoning Score
How did you get there?
AI reads your written reasoning and scores the logical structure of your argument. Valid evidence, sound inference, identification of relevant factors, absence of unsupported leaps. The score tells you whether your thinking process is sound — independent of whether the answer happened to be correct.
Self-awareness Score
How well did you know what you knew?
Your confidence ratings versus your actual accuracy. The gap between these two numbers is your Calibration Error. A self-awareness score of 90+ means your confidence is an accurate predictor of your performance. Most untrained people score below 70 on this layer.
Example session result
Outcome Score: 78% (7 of 9 correct)
Reasoning Score: 64 / 100 (AI-scored)
Self-awareness Score: 71 / 100 (calibration accuracy)
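The outcome layer's number falls straight out of the raw counts. A minimal sketch of that arithmetic, illustrative only:

```python
# Example session: 7 of 9 answers correct.
correct, total = 7, 9
outcome_score = round(correct / total * 100)  # 77.78 rounds to 78
print(f"{outcome_score}%")  # prints 78%
```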
The calibration metric
What is the Brier Score?
The Brier Score is the scientific standard for measuring probability calibration. Here is an analogy that makes it concrete.
Forecaster A
The uninformative one
Says “60% chance of rain” every single day. Accurate roughly 60% of the time. Sounds reasonable — but their forecasts contain almost no information. They never distinguish between a clear summer day and a building storm front.
High Brier Score: poor forecasting. The predictions don't distinguish between certain and uncertain outcomes, so they carry almost no information.
Forecaster B
The calibrated one
Says “95% chance of rain” when a storm is imminent, “5% chance” on a cloudless day. Both forecasters might be right the same number of times — but Forecaster B tells you something useful every time they speak.
Low Brier Score — excellent calibration. Predictions carry real information and update with evidence.
In MindFrame: your Brier Score is computed across every confidence rating you make in a session. A Brier Score of 0 is perfect calibration; a score of 1.0 means you were fully confident in the wrong answer every time. Most untrained people start between 0.15 and 0.30. MindFrame tracks your improvement across sessions.
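The Brier Score itself is just the mean squared gap between stated probability and outcome. A minimal sketch, using hypothetical data for the two forecasters above (six rainy days out of ten); this is the standard formula, not MindFrame's code:

```python
def brier_score(forecasts):
    """Mean squared error between stated probability and what happened.
    forecasts: (probability, outcome) pairs, where outcome is 1 if the
    event occurred and 0 if it did not."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Forecaster A: "60% chance of rain" every day; it rains on 6 of 10 days.
a = [(0.60, 1)] * 6 + [(0.60, 0)] * 4
# Forecaster B: 95% before each of the 6 storms, 5% on the 4 clear days.
b = [(0.95, 1)] * 6 + [(0.05, 0)] * 4

print(round(brier_score(a), 4))  # 0.24   (high: uninformative)
print(round(brier_score(b), 4))  # 0.0025 (low: informative and calibrated)
```

Both forecasters are right about rain equally often, yet B's score is roughly a hundred times better, because B's probabilities move with the evidence.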
The long game
How MindFrame compounds over time
A single session shows you the gap. Repeated sessions reveal the pattern. Consistent practice produces a measurable, lasting shift in how accurately you know what you know.
You see the gap
Your first calibration score. Most people discover they are 15–25 points more confident than their accuracy warrants. This number is the baseline everything else is measured against.
You see the pattern
Five sessions generate enough data to surface a pattern: which types of questions you systematically overestimate, which modes expose your weakest dimensions, and where your reasoning breaks down.
You have a cognitive fingerprint
Your scores across all five dimensions form a stable profile: your Cognitive Fingerprint, shareable, measurable, and specific to you. And your calibration error is meaningfully lower than it was in session 1.
Five dimensions
What MindFrame actually trains
Each dimension is independently measured, independently scored, and independently coached — because they can diverge. You can be highly calibrated but reason poorly. You can reason well but fail to update beliefs.
Calibration
Matching confidence to reality
The ability to know how likely you are to be right before you get the answer. High calibration means your 80% feels like 80%, not 60% or 95%. Measured via Brier Score and calibration error across sessions.
Reasoning Quality
The logic behind your conclusions
The structural validity of your arguments. Good reasoners identify the right evidence, weigh it correctly, and avoid unsupported jumps. AI-scored on every session.
Bias Recognition
Catching systematic errors before they compound
The ability to spot cognitive biases — confirmation bias, availability heuristic, anchoring, framing effects — in your own thinking and in presented arguments. Trained through Bias Hunter and related modes.
Belief Updating
Changing your mind in proportion to evidence
The ability to revise your beliefs when evidence warrants it — neither too much nor too little. Trained through Contradiction Challenge and Memory of Thought modes.
Working Memory Utilization
Using your attention efficiently
How effectively you use limited attentional resources. High utilization means you hold the right information at the right time, avoid cognitive overload, and structure complex problems well.
All five together
Your Cognitive Fingerprint
After 20 sessions, your scores across all five dimensions form a stable profile — your Cognitive Fingerprint. Shareable, measurable, specific to you.
Start your first session
See your calibration gap — free
Your first session takes four minutes. You will see your Brier Score, your Calibration Error, and an AI debrief of your reasoning. No credit card required.
One challenge. One confidence rating. No sign-up.
This is step 2 of the session loop: the attempt plus the confidence capture. Experience the step every explanation above describes.