First principles thinking: what it actually means (and how to practice it)

"First principles thinking" has become one of those phrases that means everything and nothing. It appears in startup pitches, engineering blogs, and motivational talks, usually as a signal that the speaker is doing something more rigorous than average. Rarely does anyone explain what it actually requires.

Here is a more precise version: first principles thinking is the practice of identifying which of your beliefs are inherited assumptions and which have been verified, and then rebuilding the uncertain ones from evidence rather than analogy.

The analogy problem

Most reasoning is by analogy. You see a new situation, recognize it as similar to something you have seen before, and apply the same approach. This is efficient and often correct. It is also the thing that fails you in genuinely novel situations.

The problem is not analogy itself -- analogy is how humans learn. The problem is invisible analogy: when you treat inherited assumptions as if they were verified truths, without realizing you are doing so.

"We use a relational database because that's what we use." "We structure the team this way because this is how software teams are structured." "We deploy weekly because that's the release cadence." These are analogies. They may be correct for your situation. They may not. The question is whether you have actually evaluated them or just adopted them from elsewhere.

What rebuilding from scratch requires

Reasoning from first principles does not mean ignoring everything that is already known. It means being willing to ask whether it applies to your specific situation.

The practical steps:

Identify the assumption. Most things that feel like facts are actually decisions that someone made at some point, for reasons that may or may not still apply. Start by noticing what you are taking for granted.

Ask what you would believe if you did not already have this belief. This is harder than it sounds. Our assumptions are often so embedded that we cannot see them as assumptions. Thought experiments and good disagreement partners help here.

Trace the reasoning. If the assumption is correct, there should be a reason. Not "we've always done it this way" or "that's what the framework recommends," but an actual causal chain connecting the choice to outcomes you care about.

Test it where you can. Some assumptions can be verified directly. Others require accepting uncertainty until evidence accumulates. The important thing is knowing which category you are in.
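The four steps above can be sketched as a small record you keep per assumption. This is an illustrative sketch only: the class, field names, and status categories are my own, not from the essay.

```python
from dataclasses import dataclass


@dataclass
class Assumption:
    """One inherited belief, tracked through the four steps."""
    statement: str                # step 1: what you are taking for granted
    origin: str                   # where it came from (convention, framework, prior team)
    reasoning: str = ""           # step 3: the causal chain, if you can state one
    verifiable_now: bool = False  # step 4: can it be tested directly today?
    verified: bool = False

    def status(self) -> str:
        # Knowing which category you are in is the point of step 4.
        if self.verified:
            return "verified"
        if self.verifiable_now:
            return "testable: go test it"
        return "uncertain: track until evidence accumulates"


# Hypothetical audit entries, echoing the examples from earlier in the essay.
audit = [
    Assumption("We need a relational database",
               origin="inherited from last project", verifiable_now=True),
    Assumption("Weekly deploys are the right cadence",
               origin="industry convention"),
]

for a in audit:
    print(f"{a.statement!r} [{a.origin}] -> {a.status()}")
```

The value is not in the code itself but in what the empty `reasoning` field makes visible: an assumption with no stated causal chain is exactly the invisible analogy the previous section describes.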

The engineering application

This shows up most usefully in system design.

Rich Hickey's "Simple Made Easy" is partly about this: the distinction between things that are simple (low complexity, few interconnections) and things that are familiar (easy to reach for because we have used them before). Most design decisions optimize for familiar over simple, without explicitly choosing to do so.

Dan McKinley's "Choose Boring Technology" is the same argument from a different direction: new technologies have unknown failure modes. The cost of using them is the time spent learning those failure modes. This should be a conscious tradeoff, not a default.

Neither of these is telling you what to choose. They are giving you a framework for actually evaluating the choice rather than defaulting to pattern-matching.

The limits

First principles thinking is expensive. Rebuilding every assumption from scratch, all the time, is not practical or useful. Some things you inherit are correct. Some conventions exist because the people who developed them had good reasons.

The skill is not applying this process to everything. It is recognizing which decisions are significant enough, and which inherited beliefs are uncertain enough, to be worth the cost of examining carefully.

For routine decisions, use analogies and conventions. They are often right.

For decisions that are hard to reverse, or that will shape everything else, or where the standard approach is producing poor results: stop and build from the ground up.

The question is not "did I think about this?" but "did I actually verify the thing I am relying on?" Most of the time, the answer is no. And often, that is fine. But you should know when it is not fine, and be willing to do the harder work when it matters.