Features
Everything MindFrame does — and why each part matters
MindFrame is not a quiz app with a score. Every feature is designed around a specific gap in how people think — and a precise mechanism for closing it.
Core training
14 training modes. One for every thinking gap.
Each mode targets a specific metacognitive dimension. Foundation modes are free. Advanced and Elite modes unlock with Pro.
Bias Hunter
Challenges present real-world arguments containing embedded cognitive biases. You identify the bias type and explain why the reasoning is distorted.
Calibration Lab
Answer questions across a range of domains and rate your confidence on each. The lab measures the gap between your certainty and your accuracy over time.
Reframe Forge
Given a distorted belief or cognitive trap, reframe it into a grounded, evidence-based alternative. AI scores the quality of your reframe.
Decision Lab
Structured decision scenarios with incomplete information. You map options, identify missing data, and state your uncertainty explicitly before choosing.
Reflection Sprint
Guided metacognitive reflection on a recent decision or belief you hold. Structured prompts surface the reasoning you actually used vs. the reasoning you thought you used.
Emotional Radar
Recognise how emotional state shapes your judgment. Scenarios present decisions under stress, excitement, or social pressure — you identify the emotional distortion.
Prediction Arena
Make predictions about real-world events with confidence ratings. MindFrame tracks outcomes and computes your accuracy over time via Brier Score.
Contradiction Challenge
Given two positions you have previously held (or two premises in an argument), identify the contradiction and resolve it with evidence. AI scores the logical validity of your resolution.
Signal vs Noise
Complex information environments with irrelevant data mixed into relevant signals. You separate what matters from what distracts — under time pressure.
Memory of Thought
Track how your reasoning evolves as new information arrives across a multi-part challenge. Measures your actual belief-updating rate against the Bayesian ideal.
Uncertainty Engine
Genuinely ambiguous problems with no correct answer. You navigate them, explain your reasoning, and calibrate your confidence to the actual ambiguity of the situation.
Metacognitive Scan
Full-session review of your thinking patterns from prior sessions. You analyse your own cognitive history, identify patterns, and predict where you will fail next.
Strategy Selector
Given a novel problem type, identify which cognitive strategy is optimal before attempting it. Trains metacognitive planning — choosing how to think before you think.
Thinking Trap Escape
You are shown a reasoning path that leads to a trap — a subtle cognitive error. Identify where it goes wrong and construct the escape route.
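Several modes benchmark against a formal standard, and the maths behind them is ordinary probability. Memory of Thought's "Bayesian ideal", for instance, is plain Bayes' rule. A minimal one-step sketch in Python (function name and numbers are illustrative, not MindFrame's API):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior belief after one piece of evidence, via Bayes' rule."""
    hit = prior * p_evidence_if_true            # P(belief) * P(evidence | belief)
    miss = (1 - prior) * p_evidence_if_false    # P(not belief) * P(evidence | not belief)
    return hit / (hit + miss)

# Start at 50% belief; observe evidence 3x as likely if the belief is true
print(round(bayes_update(0.5, 0.6, 0.2), 2))  # → 0.75
```

An ideal updater moves from 50% to 75% on that evidence; under-updating (landing at, say, 60%) or over-updating (95%) is the gap the mode measures.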
The scoring system
3-layer scoring — because right/wrong is the floor
Every session produces three scores. Each one measures something the others miss.
Session result
Bias Hunter — Advanced
01 — Outcome: 78% (7 of 9 correct). Whether you got it right. The floor, not the ceiling.
02 — Reasoning: 64 / 100 (logical structure). Quality of your argument, not just the answer.
03 — Self-awareness: 71 / 100 (confidence accuracy). How well your confidence matched your accuracy.
Composite Score: 71 / 100 (weighted across all three layers)
Example session result. Your scores are tracked and trended across every session.
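For illustration, the composite in the example session is consistent with an equal-weight average of the three layers. A minimal sketch, assuming equal weights (MindFrame's actual weighting is not published here, and the function name is hypothetical):

```python
def composite_score(outcome, reasoning, self_awareness,
                    weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted average of the three session layers, rounded to an integer.

    Assumes equal weights by default; the real product weighting may differ.
    """
    w_o, w_r, w_s = weights
    return round(w_o * outcome + w_r * reasoning + w_s * self_awareness)

# The example session: Outcome 78, Reasoning 64, Self-awareness 71
print(composite_score(78, 64, 71))  # → 71
```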
AI Coaching
Claude analyses every session
After each session, Claude reads your full session log — every answer, every confidence rating, every reasoning trace — and produces a personalised debrief.
This is not a generic tips list. It identifies the specific pattern behind your errors in this session, names your weakest dimension, and prescribes the exact next mode to train.
- Names your weakest metacognitive dimension from this session
- Surfaces the pattern in your errors — not just the errors themselves
- Prescribes the specific mode to train next
- Tracks whether prior session prescriptions improved your scores
AI Coaching Debrief
Session 7 — Bias Hunter
Cognitive Twin — live model
Predicted blind spot: Overconfidence in domains with strong prior beliefs. Watch Calibration Lab and Contradiction Challenge results.
Cognitive Twin
Your evolving thinking model
The Cognitive Twin is a live model of how you think — built from your session history and updated after every training session. It tracks your five metacognitive dimensions over time, not just today's score.
As the model matures, it begins to predict your blind spots — the specific question types, domains, and emotional conditions where your calibration tends to break down. These predictions sharpen the AI coaching recommendations.
- Tracks all 5 dimensions independently across your full session history
- Surfaces persistent patterns, not just recent sessions
- Predicts your blind spots before you encounter them
- Updates after every session — no manual input
Cognitive Fingerprint
A shareable map of how you think
After 20 sessions, your scores across all five dimensions stabilise into a recognisable pattern — your Cognitive Fingerprint. It is specific to you, built from your actual performance data, and updated as you improve.
The Fingerprint is a shareable public profile badge — a single-image visualisation of your thinking profile that you can attach to your professional bio, portfolio, or LinkedIn.
- Radar chart across all 5 metacognitive dimensions
- Updated automatically as your scores change
- Public profile badge — shareable link and image export
- Percentile ranking within the MindFrame Hive
Cognitive Fingerprint
Built from 24 sessions
Radar chart: Calibration 72 · Reasoning 64 · Bias 81 · Belief 69 · Memory 75
Prediction Journal
Track what you predicted and whether you were right
Log predictions about real-world events with confidence ratings and target resolution dates. MindFrame timestamps each prediction and surfaces it when the resolution date arrives. Accuracy is computed automatically via Brier Score.
- Timestamped entries with confidence level
- Automatic outcome resolution reminders
- Cumulative Brier Score across all logged predictions
- Topic-level accuracy breakdown
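The Brier Score used here (and in the Arena below) is the standard one: the mean squared gap between stated confidence and actual outcome. A minimal sketch (function name illustrative):

```python
def brier_score(predictions):
    """Mean squared gap between stated confidence (0..1) and outcome (1 or 0).

    0.0 is perfect; always answering with 50% confidence scores 0.25,
    so lower is better and 0.25 is the chance-level floor to beat.
    """
    return sum((conf - outcome) ** 2 for conf, outcome in predictions) / len(predictions)

# Three resolved predictions as (stated confidence, actual outcome) pairs
log = [(0.9, 1), (0.6, 0), (0.8, 1)]
print(round(brier_score(log), 3))  # → 0.137
```

Note that the score punishes confident misses hardest: the single wrong 60%-confidence call contributes more error than both correct calls combined.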
Forecasting Arena
Compete on calibration — not just accuracy
The Forecasting Arena presents shared prediction challenges to all Pro members. The leaderboard ranks by Brier Score — so the winner is the most calibrated forecaster, not the most confident one. Compete with others on the dimension that actually matters.
- Shared challenges — same question for all participants
- Ranked by Brier Score, not just % correct
- Weekly and all-time leaderboards
- Topic specialisations (geopolitics, technology, markets)
Adaptive learning
The right mode at the right time
MindFrame's Adaptive Learning Path reads your Cognitive Twin data and surfaces the mode that will produce the highest improvement gain for your current weakest dimension.
You never have to wonder what to work on next. The system knows which dimension is lagging, which mode targets it most directly, and presents it on your dashboard as your recommended next session.
- Reads your Cognitive Twin — not just your last session
- Prioritises by improvement potential, not by what you enjoy most
- Updates the recommendation after every session
- Explains why it chose the mode it did
Recommended next session
Calibration Lab
Foundation · Free
Your Calibration score has been your weakest dimension for 3 sessions. Calibration Lab specifically targets the confidence-accuracy gap — estimated +8pt improvement at your current trajectory.
Social & competition
Compete on the dimensions that matter
The MindFrame Hive leaderboard ranks by calibration and reasoning quality — not trivia scores. Cognitive Wars are head-to-head matches on specific metacognitive dimensions.
Hive Leaderboard
Ranked by Brier Score and composite metacognitive score — not just how many questions you answered. The top performers are the most calibrated, not the most aggressive.
- Weekly and all-time leaderboards
- Category-specific rankings
- Percentile benchmarks against the full Hive
Cognitive Wars
Head-to-head matches on a specific metacognitive dimension. Challenge a colleague or a random opponent — win by outscoring them on calibration and reasoning quality, not speed.
- Async format — no live scheduling required
- Dimension-specific matches
- Team vs. team competitions for Pro accounts
Certifications
Five specialisations — earned, not bought
MindFrame certifications are performance-gated. You cannot pay for them. Each one requires sustained performance across multiple sessions.
Calibration Expert
Brier Score below 0.10 sustained across 25 sessions. Your confidence is a reliable predictor of your performance.
Reasoning Pro
Average AI reasoning score above 85 across 20 sessions. Your arguments are logically sound and evidence-based.
Bias Detector
Identify bias correctly in 90%+ of Bias Hunter challenges across 15 sessions.
Forecasting Specialist
Top 10% Brier Score in Prediction Arena across 50+ tracked predictions.
Metacognition Master
All five dimensions above 80 simultaneously for five consecutive sessions.
Train your whole team to think better
MindFrame Teams gives managers visibility into collective thinking patterns — where your team overestimates, which biases are most prevalent, which members are improving fastest. Every team member gets a personalised training path.
- Shared analytics dashboard — collective strengths and blind spots
- Manager insights — engagement, completion rates, thinking patterns per person
- Team leaderboard and Cognitive Wars — competitive culture building
- Bulk seat management — add or remove members from one dashboard
- HR & L&D reports — exportable cognitive performance summaries per cohort
- Dedicated onboarding and priority support
Teams — early access
Leave your work email and we'll reach out with pricing and a demo.
Get early access for teams
No commitment required.
Get started
Start your first session — free
Free plan includes 5 Foundation modes, calibration scoring, and session analytics. No credit card required.
Answer one challenge. No sign-up.
This is what a real MindFrame attempt looks like — confidence rating included. Your answer becomes one input the AI coach reads.