September 25, 2025 · 8 min

Why I Try to Stay Non-Partisan (And Why That's Harder Than It Sounds)

It's not neutrality or false balance. It's recognition that I could be wrong, that I'm an outsider to American democracy, and that genuine wisdom of crowds requires authentic independent thinking.

critical-thinking · frameworks · democracy · intellectual-humility · all-books

People sometimes ask why I don't take explicit political positions despite spending so much time analyzing political and social systems. They notice I write about incentive structures in education, rational dysfunction in institutions, and market dynamics in luxury goods, but I rarely conclude with "therefore you should vote for X" or "this policy is obviously right."

This isn't neutrality. I have strong political opinions formed through careful analysis. I can defend them. I update them when evidence changes. But I deliberately keep them out of my public work, and that choice is harder to maintain than you might think.

The Intellectual Humility Argument

The first reason is honest uncertainty about my own correctness. I've done the research, built the frameworks, reached conclusions I can defend with evidence and reasoning. But I'm one person with one particular brain shaped by one particular set of experiences. I could be wrong.

This isn't false modesty or epistemic relativism. Some positions are clearly better supported by evidence than others. Some arguments are obviously stronger than others. But on complex political questions where reasonable people examining the same evidence reach different conclusions, my confidence should be calibrated to the actual strength of my position. And that confidence is rarely 100%.

Research on motivated reasoning reveals something uncomfortable about human cognition: we're all vulnerable to using our intelligence to rationalize rather than reason. Dan Kahan's work on identity-protective cognition shows that on politicized issues, even people with high cognitive skills use those skills to defend their tribal position rather than pursue truth. The smarter you are, the better you are at constructing sophisticated justifications for what you already wanted to believe.

I'm not immune to this. Nobody is. The best I can do is maintain awareness of the bias and build in checks: actively seek evidence that contradicts my position, engage with the strongest counterarguments rather than the weakest, hold beliefs proportional to evidence, update when circumstances change.

But here's the key: if I'm uncertain enough about my conclusions to hold myself to this kind of epistemic humility, I'm certainly not certain enough to demand that others agree with me. I can share the analytical frameworks that led me to my conclusions. Whether someone else reaches the same conclusions using those frameworks depends on their values, experiences, and context.

The Outsider Perspective

The second reason is simpler: I'm not American. I was born in Mumbai, adopted and raised in Singapore, moved to the US as an adult. I'm deeply embedded in American life now, navigating American institutions, building businesses and a life here. But I'm still an observer of American democracy, not a native participant in it.

This gives me a particular kind of analytical distance. I can see patterns and incentive structures that people raised within the system might take for granted. I can compare American institutional dynamics to what I've observed in other countries. I can analyze the system without the emotional weight of it being "my" system from birth.

But that distance also means I'm not sure I have the standing to tell Americans how they should govern themselves. I can provide analytical tools for understanding how incentive structures shape political behavior, how rational dysfunction emerges in democratic institutions, how historical patterns repeat. But the ultimate choices about values and tradeoffs belong to people whose stake in the outcome is deeper than mine.

There's something presumptuous about an outsider, no matter how well-informed, declaring the correct answer to complex questions about a democracy they weren't raised in. I can contribute analysis and frameworks. I should be careful about prescribing conclusions.

The Wisdom of Crowds

The third reason connects to a core belief about democratic decision-making: genuine wisdom of crowds requires authentic independent thinking, not manufactured consensus or tribal coordination.

James Surowiecki's work on collective intelligence shows that crowds can make better decisions than experts when a few conditions hold: diversity of opinion, independence of judgment, decentralization, and an effective mechanism for aggregating individual judgments. The key insight is that the crowd must consist of people thinking independently and bringing different information and perspectives. When people simply copy each other or coordinate around tribal positions, you don't get wisdom of crowds. You get herding.
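A quick way to see why independence matters is to simulate it. The sketch below is not from Surowiecki's book; it's a minimal Python illustration with assumed parameters (a "true value" of 1000, 100 guessers, a simple herding model in which each guesser anchors to the running average of earlier guesses) that compares how far the crowd's average lands from the truth in the independent case versus the herded case.

```python
import random
import statistics

def independent_estimates(true_value, n, noise_sd):
    """Each person guesses on their own: noisy but uncorrelated errors."""
    return [random.gauss(true_value, noise_sd) for _ in range(n)]

def herded_estimates(true_value, n, noise_sd, anchor_weight=0.8):
    """Each person mostly copies the running average of earlier guesses,
    so errors become correlated instead of cancelling out."""
    estimates = []
    for _ in range(n):
        own_guess = random.gauss(true_value, noise_sd)
        if estimates:
            anchor = statistics.mean(estimates)
            estimates.append(anchor_weight * anchor + (1 - anchor_weight) * own_guess)
        else:
            estimates.append(own_guess)
    return estimates

def crowd_error(estimator, trials=2000, true_value=1000, n=100, noise_sd=300):
    """Average absolute error of the crowd's mean estimate across many trials."""
    errors = []
    for _ in range(trials):
        guesses = estimator(true_value, n, noise_sd)
        errors.append(abs(statistics.mean(guesses) - true_value))
    return statistics.mean(errors)

if __name__ == "__main__":
    print("independent crowd error:", round(crowd_error(independent_estimates), 1))
    print("herded crowd error:     ", round(crowd_error(herded_estimates), 1))
```

Because uncorrelated errors cancel when averaged while anchored errors don't, the independent crowd's mean typically lands far closer to the true value. That cancellation is the statistical core of the "independence of judgment" condition.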

This means that for democracy to work well, we need people reaching their own conclusions through rigorous analysis rather than adopting ready-made positions from their tribe. We need genuine diversity of thought, not just demographic diversity or the appearance of disagreement between pre-coordinated factions.

If we could truly hear every person's authentic reasoning—the actual fears, needs, values, and tradeoffs driving their positions—we'd collectively get closer to truth than any individual perspective could reach alone. But that requires creating space for people to think independently using shared analytical frameworks, not telling them which conclusions those frameworks should produce.

Staying non-partisan and focusing on frameworks rather than conclusions is my attempt to contribute to the conditions that make democratic wisdom possible. If my frameworks are only useful to people who already agree with my political conclusions, they're not frameworks. They're tribal markers disguised as analysis.

ℹ️ The Research Shows: Open classroom climates where students feel safe expressing unpopular opinions and engaging with diverse viewpoints significantly increase civic participation and democratic engagement. Political tribalism, by contrast, turns evidence and reasoning into weapons for defending predetermined conclusions.

The Practical Accessibility Argument

The fourth reason is entirely practical: I want everyone to use these analytical tools, regardless of their political identity or starting beliefs.

Systems thinking, incentive analysis, rational dysfunction frameworks—these tools work across ideological lines. A conservative can use the Incentive Quadrant to analyze government programs. A progressive can use it to analyze corporate behavior. A libertarian can use it to analyze both. The framework itself is politically neutral even though its applications might lead to politically charged conclusions.

But if I package these frameworks with explicit political conclusions, I immediately cut my potential audience in half. People who disagree with my politics will dismiss the frameworks as partisan reasoning dressed up as analysis. They'll never engage with the tools that might have helped them think more clearly about complex systems.

This isn't capitulation or false balance. I'm not pretending all political positions are equally valid or refusing to acknowledge when evidence clearly supports one view over another. But I am recognizing that people are more likely to engage with analytical frameworks when those frameworks aren't pre-loaded with tribal markers.

If someone uses my frameworks rigorously and reaches different conclusions than mine, that's not a failure. That's the system working. They showed their work. They acknowledged tradeoffs. They can defend their reasoning. We just weight values differently or have access to different contextual information.

Why It's Harder Than It Sounds

Maintaining this non-partisan stance requires constant discipline because political positioning offers immediate rewards that genuine analysis doesn't.

Taking a clear political position signals tribal membership. It tells your audience "I'm one of you" and immediately builds connection with people who share that identity. It's satisfying to have your existing beliefs validated. It feels good to be part of a team fighting for what's "right" against those who are "wrong."

Refusing to take sides, by contrast, leaves everyone slightly dissatisfied. People who agree with your analysis want you to go further and validate their conclusions. People who disagree see your refusal to choose sides as cowardice or privilege. Nobody gets the affirmation they're seeking.

The incentive structure of modern media and social platforms rewards partisan positioning. Content that reinforces tribal beliefs gets shared within that tribe. Nuanced analysis that refuses to pick a side gets ignored or attacked by both sides. The algorithms optimize for engagement, and engagement comes from making people feel good about their team or angry at the other team.

I feel this pull constantly. When I see policy decisions that seem clearly wrong based on my analysis, I want to say so explicitly. When I observe rational dysfunction producing obviously harmful outcomes, I want to point directly at the responsible parties. The urge to take sides and join the fight is strong.

But every time I give in to that urge, I sacrifice the ability to reach people outside my tribe. I trade long-term impact for short-term satisfaction. I turn analytical frameworks into partisan weapons rather than universal tools.

What This Doesn't Mean

Staying non-partisan doesn't mean pretending all positions are equally valid or refusing to acknowledge when evidence clearly points in one direction. Some factual claims are true and others are false. Some arguments are sound and others contain logical fallacies. Some policies are supported by evidence and others contradict it.

When I analyze luxury pricing dynamics or educational policy changes or social media's cognitive effects, I follow the evidence wherever it leads. I don't artificially balance conclusions to appear neutral. If the research clearly supports one interpretation over another, I say so and explain why.

What I don't do is extend that factual analysis into prescriptive political conclusions that depend on value judgments and contextual tradeoffs. The evidence might clearly show that a policy produces certain outcomes. Whether those outcomes are acceptable depends on how you weight competing values, and that's where political disagreement legitimately exists.

I can tell you what the incentive structures are. I can show you the rational dysfunction patterns. I can explain the historical parallels and likely trajectories. But I can't tell you which tradeoffs you should accept or how to weight competing priorities. That depends on your values, your context, your assessment of risks.

The Long Game

Democracy requires citizens capable of independent thought, not citizens who've adopted the correct opinions. When we optimize for agreement rather than reasoning capacity, we undermine the wisdom of crowds by creating crowds of people who can't actually think independently about complex problems.

My contribution is providing analytical frameworks that help people think more rigorously about systems, institutions, and incentives. Whether they use those frameworks to reach conservative, progressive, libertarian, or entirely novel conclusions is up to them.

I stay non-partisan not because I lack opinions or think all positions are equally valid. I stay non-partisan because I believe the frameworks are more valuable than my conclusions, because I could be wrong, because I'm an outsider to American democracy, and because genuine wisdom of crowds requires authentic independent thinking.

It's harder than taking sides. The incentives push constantly toward tribal positioning. But it's the only way to build something that serves democracy rather than just another faction within it.


Related in This Series:
For the foundations of this approach, read How to Think vs What to Think: Why I Focus on Frameworks, Not Conclusions. To understand how even smart people fall into rationalization traps, see Rationalization vs Rationality: The P-Hacking Problem.
