
The Research Behind MindFrame

Every training mode, scoring metric, and feedback mechanism in MindFrame is grounded in peer-reviewed research on metacognition, calibration, and cognitive skill development.

Effect Size Evidence Table

What is g? g (Hedges' g), d (Cohen's d), and ES are all standardised effect sizes — a universal ruler that lets researchers compare results across different studies and populations. Think of it as a percentile gap: g = 0.63 means a person at the 50th percentile in the trained group would outscore roughly 73% of untrained people.

Scale: 0.20 = small  ·  0.50 = medium  ·  0.80 = large  ·  1.0+ = very large. Values above 0.40 are considered educationally significant (Hattie, 2009).
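The percentile reading used throughout the table below follows directly from the standard normal CDF, assuming scores in both groups are normally distributed with equal variance (the Cohen's U3 interpretation). A minimal sketch (function name is illustrative):

```python
from math import erf, sqrt

def effect_to_percentile(g: float) -> float:
    """Percentile of the average trained person within the untrained
    distribution: the standard normal CDF evaluated at g (Cohen's U3)."""
    return 0.5 * (1 + erf(g / sqrt(2)))

# g = 0.63 -> the average trained person beats ~73.6% of the untrained group
print(f"{effect_to_percentile(0.63):.1%}")  # 73.6%
```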

| Intervention | Effect size | Evidence | Confidence |
|---|---|---|---|
| Metacognitive Training | g = 0.63 | Trained people outscore 73% of untrained — beats homework, class-size cuts, and most tutoring | high |
| Metacognitive Instruction | ES ≈ 1.11 | Trained people outscore ~86% of untrained — among the strongest effects in cognitive training | high |
| Metacognitive Therapy (MCT) | g = 0.69 | Trained people outscore ~75% of CBT patients — outperforms the gold standard for anxiety | high |
| Calibration Training | +14% | Consistent, measurable accuracy gain from structured practice | high |
| Working Memory Training | g ≈ 0.28 | Small-to-medium — modest generalisation beyond trained tasks | medium |
| Spaced Repetition | d = 0.47–0.71 | Medium-to-large — strong durable memory across 254 studies | high |
| Error Monitoring Training | Significant | Consistent improvement in decision accuracy across multiple RCTs | medium |

Training Principles

The five mechanisms through which MindFrame produces measurable improvement.

01

Calibrated confidence, not just accuracy

Getting an answer right is not enough. Knowing when you're right versus guessing — and assigning the correct probability — is what distinguishes expert decision-makers from lucky ones. Every MindFrame challenge requires you to state your confidence alongside your answer.

Brier Score + Calibration Error
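Both metrics named above have standard textbook definitions. A minimal sketch of each (the 10-bucket binning scheme and all names are illustrative — MindFrame's internal scoring may differ):

```python
def brier_score(forecasts):
    """Mean squared gap between stated confidence p and outcome o (0 or 1).
    0.0 is perfect; always answering 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_error(forecasts, bins=10):
    """Expected calibration error: group answers into confidence buckets,
    then average |mean confidence - actual accuracy| weighted by bucket size."""
    buckets = [[] for _ in range(bins)]
    for p, o in forecasts:
        buckets[min(int(p * bins), bins - 1)].append((p, o))
    n = len(forecasts)
    return sum(
        (len(b) / n)
        * abs(sum(p for p, _ in b) / len(b) - sum(o for _, o in b) / len(b))
        for b in buckets if b
    )

# Illustrative session: (stated confidence, was the answer correct?)
answers = [(0.9, 1), (0.9, 1), (0.9, 0), (0.6, 1), (0.6, 0)]
print(round(brier_score(answers), 2))       # 0.27
print(round(calibration_error(answers), 2)) # 0.18 (overconfident by ~18 points)
```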
02

Immediate, precise feedback

Vague feedback ("good job") produces no improvement. Improvement requires specific information about where you deviated from ideal performance. MindFrame gives you percentile rankings, calibration error breakdown, and mode-level analytics after every session.

Composite Score, Mode Breakdown, Percentile Rank
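Of the metrics just listed, percentile rank has the most direct definition: the fraction of a comparison cohort scoring below you. A sketch (the cohort data is illustrative):

```python
from bisect import bisect_left

def percentile_rank(score: float, cohort: list[float]) -> float:
    """Fraction of cohort scores strictly below the given score."""
    ordered = sorted(cohort)
    return bisect_left(ordered, score) / len(ordered)

# Illustrative cohort of composite scores
print(percentile_rank(72.0, [55, 60, 68, 72, 80, 91]))  # 0.5
```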
03

Spaced repetition scheduling

The forgetting curve is real. A single exposure to a concept produces brief retention. Reviewing at increasing intervals forces retrieval practice, which produces durable memory and skill. MindFrame uses SM-2 scheduling to surface the right challenge at the right time.

SM-2 adaptive scheduling
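SM-2 is a published algorithm (Wozniak's SuperMemo 2), so its update rule can be sketched directly; whether MindFrame modifies it is not stated here, so treat this as the textbook version:

```python
def sm2_update(quality: int, reps: int, interval: int, ef: float):
    """One SM-2 review step. quality: self-graded recall, 0 (blackout)
    to 5 (perfect). Returns the new (reps, interval_in_days, easiness)."""
    if quality < 3:
        return 0, 1, ef                  # failed recall: restart the cycle
    # Easiness factor drifts with recall quality, floored at 1.3
    ef = max(1.3, ef + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    if reps == 0:
        interval = 1                     # first successful review: 1 day
    elif reps == 1:
        interval = 6                     # second: 6 days
    else:
        interval = round(interval * ef)  # after that, grow geometrically
    return reps + 1, interval, ef

# A card recalled perfectly three times: intervals 1, 6, then ~17 days
state = (0, 0, 2.5)
for _ in range(3):
    state = sm2_update(5, *state)
print(state)
```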
04

Reasoning quality evaluation

Correct answers reached through faulty reasoning don't transfer to novel situations. MindFrame's AI coach evaluates the logical structure and quality of your reasoning, not just whether your final answer was right.

AI Reasoning Score
05

Reflective consolidation

Research on learning shows that post-session reflection significantly increases knowledge transfer. After every session, MindFrame prompts you to identify your biggest error and the strategy that would have prevented it.

Session Journal

Mode × Research Map

How each MindFrame training mode maps to a specific cognitive skill and research base.

| Mode | Cognitive skill | What it trains | Research base |
|---|---|---|---|
| Calibration | Probability estimation | Reduces overconfidence by training confidence-accuracy match | Fischhoff et al. 1977; Tetlock 2015 |
| Reasoning | Analytical reasoning | Improves argument evaluation and logical validity detection | Stanovich 2016 |
| Bias Detective | Bias recognition | Reduces susceptibility by increasing bias fluency | Kahneman 2011; Nickerson 1998 |
| Working Memory | Attentional control | Increases capacity and resistance to distraction | Basak et al. 2008; Cowan 2001 |
| Contradiction | Cognitive flexibility | Trains perspective-shifting and belief updating under conflict | Diamond 2013 |
| Scenarios | Belief updating | Trains proportional response to new evidence | Tetlock & Gardner 2015 |
| Daily Challenge | All-mode integration | Cross-domain calibration across all 5 skill areas | Compound training protocols |

Put the evidence to work

Effect sizes only become results when paired with consistent practice. Start a session and get your first calibration baseline in 10 minutes.