Values Corrupting Logic
Emotional appeals are effective because human values consistently bias the interpretation of facts
Core Idea
Supply humans with sufficient facts, and logic will do the rest.
In this simple version of how society works, progress itself is framed as the steady march from superstition to objectivity, from tribal myth to empirical law.
If only people could see the numbers, the data, the rigorous peer-reviewed studies, then rational behaviour would follow.
It’s the Enlightenment orthodoxy insisting that evidence is the handmaiden of reason.
And it is a comforting myth, a high-minded endorsement of the supremacy of logic in human affairs. The trouble is, it’s wrong, and not by a little.
Scratch the surface of any heated public controversy (take your pick from climate, diet, economics, bioethics and a host of others) and what you see is not the triumph of evidence but the dogged persistence of value-driven factionalism.
People cherry-pick, rationalise, or outright ignore mountains of data that threaten their priors. The shock isn’t that reason occasionally fails. It’s that, as a rule, reason bows to values and rarely dares admit it.
A mindful sceptic knows all this.
Mindful sceptics know that underneath every logical justification sits a submerged moral axiom, quietly steering thought long before any spreadsheet is consulted.
Counterpoint
The myth of evidence-based reason dies hard.
In part, this is because the narrative is noble: given enough good evidence, rational actors will make decisions that are logically sound and socially optimal. We teach this creed in classrooms, bake it into policy, and wield it in debates like a moral cudgel.
The Enlightenment itself rests on the belief not just in reason, but in reason’s inevitability if only ignorance is banished.
Yet, in practice, decisions are rarely the outcome of cool calculation. Even in data-rich domains like medicine, resource management and the purest science, values walk in the door long before the peer-reviewed paper hits the desk.
Asked to weigh risk, cost, well-being, or justice, people default to gut-level preferences masquerading as conclusions. The scientist deludes herself into believing her priorities are purely rational. The policymaker wraps his bias in a cost–benefit cloak. The activist marshals facts to defend what was already a moral crusade.
Evidence, when it matters most, can’t compete with values.
Pure objectivity is not only rare; it is structurally impossible in a world of subjective stakes. Pretending otherwise is the adult version of believing in Santa Claus… emotionally essential, socially convenient, and always revealed as wishful thinking in the end.
Thought Challenges
Audit a Controversy… Pick a real-world debate that matters, say climate policy, pandemic response, land use, or a political campaign. List the key arguments each side deploys. Then, for each, write down the value assumption that has to be true for its logic to hold. Whose values are being protected, and whose are at risk? Watch how evidence is marshalled less to inform and more to justify.
Run the Values Test… Next time a strong piece of evidence lands in your lap, whether from your Insta feed, the news media or that expert on the telly, pause before accepting or dismissing it. Ask yourself: if you held the opposite value, how would you dispute or ignore this evidence? Practise shifting value lenses, and note how your logic adapts.
Closing Reflection
There’s no shame in having values.
Indeed, they are more essential to our mental well-being than data. But there is shame in pretending they came from logic alone.
For a mindful sceptic, recognising the value–logic entanglement is armour against hubris and tribal dogma.
Real clarity isn’t a triumph of evidence over values; it’s bearing witness to the inevitable conflict between them and acting accordingly.
The sooner this is faced, the more trustworthy the reasoning that survives.
Evidence Support
Kunda, Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
TL;DR... reviews decades of psychological research showing that motivation shapes reasoning through the cognitive processes used to access, construct, and evaluate beliefs, biasing each step toward preferred conclusions. Individuals selectively accept, interpret, and recall evidence that supports their preexisting values or desires.
Relevance to insight... a classic paper that systematically demonstrates the mechanisms by which values and motivations bias reasoning, showing that the process of interpreting evidence is rarely neutral. It directly supports the stated insight by detailing how robust evidence can be rejected when it conflicts with what people want to believe.
Kahan, D. M., Peters, E., Wittlin, M., Slovic, P., Ouellette, L. L., Braman, D., & Mandel, G. (2012). The polarizing impact of science literacy and numeracy on perceived climate change risks. Nature Climate Change, 2(10), 732–735.
TL;DR... a large survey study found that higher scientific literacy and numeracy do not bring individuals’ perceptions of climate change risk closer to the scientific consensus; instead, they increase political polarisation. Individuals with greater ability to process evidence were even more likely to interpret ambiguous information in a way that aligned with their ideological values.
Relevance to insight... directly refutes the Enlightenment assumption that more knowledge leads to more rational (and convergent) decisions, demonstrating instead that values and identity drive the interpretation of evidence—especially among the most informed.
Taber, C. S., & Lodge, M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.
TL;DR... experimental evidence that individuals engage in motivated scepticism: they subject counter-attitudinal arguments to more scrutiny while readily accepting information supporting their positions. The result is the reinforcement of prior attitudes, regardless of the objective strength of evidence.
Relevance to insight... supports the core insight by showing experimentally how values and preconceptions shape both the interpretation and acceptance of evidence, leading to persistent polarisation and resistance to logical counter-arguments.
Lord, C. G., Ross, L., & Lepper, M. R. (1979). Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. Journal of Personality and Social Psychology, 37(11), 2098–2109.
TL;DR... people presented with identical mixed evidence on controversial topics interpreted it in ways consistent with their initial beliefs, leading to increased polarisation. Evidence did not cause convergence around rational conclusions but drove divisions deeper.
Relevance to insight... this foundational study is key to understanding why good evidence often fails to persuade: human evaluation of facts is filtered through a value-laden lens, matching the insight’s claim that values can override logical conclusions.
Mercier, H., & Sperber, D. (2011). Why do humans reason? Arguments for an argumentative theory. Behavioral and Brain Sciences, 34(2), 57–74.
TL;DR... the primary function of human reasoning is not to achieve objective truth but to justify one’s beliefs and convince others, favouring group cohesion and social success. This often results in confirmation bias and the rationalisation of preferred conclusions, even in the face of strong counter-evidence.
Relevance to insight... reframes rationality not as the pursuit of evidence-based beliefs but as a tool for defending values and positions, providing a comprehensive theoretical basis for the empirical findings that values often trump logic and reason.
Each of these papers is widely cited and considered foundational in the study of motivated reasoning, confirmation bias, and the role of values in the evaluation of evidence. They collectively demonstrate that, however robust, evidence is often subordinate to the values, identities, and motivations that underpin human decision-making.