Intelligence, as measured by IQ and similar tests, correlates strongly with many good outcomes: academic performance, income, problem-solving speed. It's real, and it matters. But a striking body of research suggests that above a certain threshold, raw intelligence starts to generate new problems — specifically, the ability to build elaborate, convincing justifications for whatever you already believe.

Psychologist David Perkins called this "myside bias": the tendency to evaluate evidence and arguments through the lens of your existing position. Smarter people, his research found, are better at myside reasoning: they generate more arguments for their own side, spot more weaknesses in opposing views, and are more convincing to themselves and others. This is not the same as being more accurate.

Intelligence as a double-edged tool

Think of raw intelligence as a progressively sharper tool. More intelligence means faster pattern recognition, better working memory, quicker synthesis of complex information. These are genuine advantages.

But a sharper tool cuts in all directions. The same cognitive resources that let a brilliant analyst model a complex system can also be turned toward defending an incorrect prior with extraordinary sophistication. The highly intelligent person can build a defense of their position so elaborate that almost nobody can rebut it, even when the position is wrong.

Psychologists call this motivated reasoning: reasoning in service of a predetermined conclusion rather than truth-seeking. Less intelligent people do this too, but they're often easier to argue with because their defenses are simpler. Highly intelligent people can construct genuinely impressive-looking arguments for incorrect positions.

What metacognition adds

Metacognition, thinking about your own thinking, is the countermeasure. A metacognitive thinker reasons not only about the world but also about their own reasoning process. They ask: "Am I looking for evidence that confirms this, or evidence that tests it? Am I updating my beliefs proportionally to the data, or defending a prior commitment? Would I accept this reasoning if it pointed in a direction I didn't want?"

These questions require stepping outside your own cognition and observing it from a distance. Intelligence helps you reason within a frame; metacognition helps you examine and question the frame itself.

Research on expert performance consistently finds that metacognitive awareness is a stronger predictor of long-run accuracy than intelligence alone. In Tetlock's forecasting studies, the best forecasters were distinguished by metacognitive habits (monitoring their own reasoning, actively seeking disconfirming evidence, separating their identity from their predictions), not by raw intellectual firepower.

The expert blind spot problem

One specific failure mode worth highlighting: expertise can generate overconfidence in related but distinct domains. A world-class epidemiologist may be poorly calibrated when making predictions about economic policy, but their confidence — built from genuine expertise in one field — transfers incorrectly to the adjacent domain. This is especially common in intellectuals who are used to being right.

Metacognition provides a partial fix: explicitly tracking which domains you have genuine evidence-based track records in, versus which you're reasoning by analogy from expertise elsewhere. The question "Am I an expert here, or do I just feel like one?" is harder to answer than it sounds, and high intelligence often makes it harder, not easier.

Training metacognition separately from intelligence

This is the core reason metacognition training is a distinct discipline from general intellectual training. Reading more books makes you more informed. Solving harder puzzles builds reasoning skills. But neither of these directly trains the self-monitoring, confidence calibration, and bias detection that constitute metacognitive skill.

Those skills require a different kind of practice: receiving feedback on your confidence levels (not just your accuracy), being confronted with your own inconsistencies, learning to detect the signals that precede your own reasoning errors. It's uncomfortable in a different way than intellectual challenge — less about effort, more about honest self-observation.
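What "feedback on your confidence levels, not just your accuracy" means can be made concrete with a proper scoring rule such as the Brier score, which penalizes miscalibrated confidence rather than only wrong answers. A minimal sketch (the forecasts and outcomes below are invented for illustration):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and binary outcomes.

    Lower is better. Overconfident wrong answers are punished heavily,
    so the score rewards calibration, not just hit rate.
    """
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical forecasters with the SAME accuracy (each gets 3 of 4
# calls right at a 0.5 cutoff) but different confidence habits:
outcomes = [1, 1, 0, 1]
confident = [0.95, 0.95, 0.95, 0.95]  # near-certain on everything
calibrated = [0.8, 0.7, 0.6, 0.7]     # confidence tracks the evidence

print(brier_score(confident, outcomes))   # higher (worse) score
print(brier_score(calibrated, outcomes))  # lower (better) score
```

The point of the exercise: an accuracy-only scoreboard cannot distinguish these two forecasters, while a calibration-sensitive score immediately shows that reflexive near-certainty is a reasoning error in its own right.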

The goal isn't to be less confident. It's to have your confidence be a reliable signal rather than a reflection of identity, expertise signaling, or wishful thinking. The most accurate forecasters in Tetlock's research weren't the ones with the highest IQs; they were the ones who combined intelligence with honest, disciplined metacognitive habits.