On Mental Model Discovery
And How It Relates To Fast Thinking
mental model
An internal representation of how something works. It’s a simplified framework in your mind that helps you understand, predict, and reason about a system, concept, or situation. It shapes how you interpret information and make decisions, even if it doesn’t capture every detail of reality.
Mental models have had a moment. There are newsletters, blogs, books, entire websites devoted to cataloging them. The claim is that this succinct list of concepts will help people cut through the noise of everyday life and provide more leverage than the endless facts we rely on day-to-day. But the content is just lists. Hundreds of concepts, defined and illustrated, presented as a collection you are supposed to somehow acquire and deploy. This recreates the original problem at a higher level of abstraction. You started with too much information. Now you have too much information about information.
The premise is right, but the execution mostly leads us back to the original problem.
The missing ingredient isn’t more models. It’s understanding what a model actually is and why some of them are worth your time, while others should be allowed to be forgotten. A small number of concepts explain a large number of phenomena, and knowing them makes you a better thinker.
There’s even a mental model for that.
The 80/20 Mental Model
A mental model isn’t just a useful concept. It’s a pattern that recurs across contexts. It’s the structure that appears in one domain, then another, then another, until you realize you’ve been looking at the same thing wearing different clothes.
Here’s what discovering one looks like in practice:
Say you’re researching properties to buy and you notice that roughly 80% of them are owned by about 20% of the local population. Interesting. You file it away.
Useful Tidbits
└── 80% of property owned by 20% of owners
A while later you’re in the emergency room and staff mention, in passing, that about 80% of injuries in that facility come from 20% of the hazards. That same ratio again.
Useful Tidbits
├── 80% of property owned by 20% of owners
└── 80% of injuries caused by 20% of hazards
Back at work as a software developer, bugs have piled up. You wonder: “Do 80% of the issues come from 20% of the bugs?” You test it. It does.
Useful Tidbits
├── 80% of property owned by 20% of owners
├── 80% of injuries caused by 20% of hazards
└── 80% of errors caused by 20% of bugs
You now have _three observations_. But more importantly, you have _one pattern_. It has a name: the Pareto Principle, after Vilfredo Pareto, the Italian economist who made the same property-ownership observation in the late 19th century. And because you’ve recognized it as a pattern rather than three separate facts, you can carry it into new situations and ask: “Where does 80/20 apply here?”
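The “you test it” step above is easy to make concrete. Here is a minimal sketch in Python; the function name and the error counts are invented for illustration, not real data:

```python
def pareto_share(counts, top_fraction=0.2):
    """Share of the total contributed by the top `top_fraction` of items."""
    ordered = sorted(counts, reverse=True)
    k = max(1, round(len(ordered) * top_fraction))
    return sum(ordered[:k]) / sum(ordered)

# Made-up error counts for 10 bugs: two "hot" bugs dominate the tally.
errors_per_bug = [40, 38, 5, 4, 3, 3, 3, 2, 1, 1]
print(f"Top 20% of bugs cause {pareto_share(errors_per_bug):.0%} of errors")
# prints: Top 20% of bugs cause 78% of errors
```

The exact ratio rarely lands on 80/20; what matters is whether a small fraction of causes accounts for a large fraction of effects.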
That’s the core principle: Not acquiring a concept, but recognizing a recurring structure, naming it, and placing it somewhere you can find it again.
Semantic Tree
└── Mental Models
└── Pareto Principle
├── Property: 80% owned by 20% of owners
├── Health: 80% of injuries from 20% of hazards
└── Software: 80% of errors from 20% of bugs
One concept. Many situations. That’s a mental model with leverage.
Why We Never Learned This
It’s not that mental models were absent from school. They were everywhere. They were just hidden in plain sight.
Take friction, for example. You almost certainly learned about it in physics: force, resistance, the coefficient of friction, a well-defined concept with equations attached. What you probably weren’t taught is that friction also appears in information theory, in organizational behavior, in product design, in economics. Anywhere energy is lost to resistance, friction is the concept doing the work.
Before
├── Physics
│ └── Mechanics
│ └── Friction
├── Information Theory
│ └── Signal Degradation
└── Organizational Theory
└── Bureaucratic Drag
After
├── Mental Models
│ └── Friction
│ ├── resistance between surfaces (Physics / Mechanics)
│ ├── signal degradation (Information Theory)
│ └── bureaucratic drag (Organizational Theory)
└── Subjects
├── Physics
├── Information Theory
└── Organizational Theory
The concept was taught. Its recurrence across contexts was not. And that recurrence is precisely what makes it a mental model rather than a piece of trivia.
Public education optimized for coverage. More subjects, more facts, more equations, more observations. The implicit assumption was that students would somehow synthesize it all, would notice the friction in physics class and connect it to the friction in economics class years later. Some do. Most don’t. And why would they? Nobody told them that was the point.
More facts is not the answer. Better organization is. The question isn’t how many concepts you know. It’s whether the ones you know are connected to each other in ways that give you leverage on the day-to-day.
The Process Is The Point
There’s a principle in software development called DRY: “Don’t Repeat Yourself.” Typically, if the same logic appears in two places (or more) in your code, you’re doing it wrong. You consolidate them into one copy, give it a name, and reference the single version from both places. The result is cleaner, more maintainable, and easier to improve, because any change you make propagates everywhere automatically.
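The consolidation move DRY describes can be shown in a few lines. This is a minimal sketch in Python; the function names and the discount rule are invented for the example:

```python
# Before: the same discount logic is written out in two places.
def checkout_total_before(prices):
    return sum(p * 0.9 if p > 100 else p for p in prices)  # 10% off big items

def invoice_total_before(prices):
    return sum(p * 0.9 if p > 100 else p for p in prices)  # same logic, copied

# After: the shared logic is extracted, named, and referenced from both
# places. A fix or rule change now propagates everywhere automatically.
def discounted(price):
    """Apply the 10%-off-over-100 rule in exactly one place."""
    return price * 0.9 if price > 100 else price

def checkout_total(prices):
    return sum(discounted(p) for p in prices)

def invoice_total(prices):
    return sum(discounted(p) for p in prices)
```

The duplicated pattern was noticed, extracted, and given a name, which is exactly the move the next paragraph applies to concepts.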
The same principle applies to concepts. When you notice the same pattern appearing in multiple contexts, that’s a signal: This pattern deserves its own node. Extract it, name it, place it somewhere in the structure of your knowledge where it can do work across all the situations where it applies.
This is what the best thinkers do naturally. They’re not collecting concepts; they’re refactoring them -- restructuring, consolidating, and simplifying until the same work gets done with less. DRY is one tool in that broader process. And the mental model list is the output of that process. What most mental model newsletters, blogs, books, and websites skip is the process itself.
It’s not just about knowing the vital few concepts among the trivial many. It’s knowing how to find them.
How A Mental Model Executes
It’s ironic to observe the explosion of interest in mental models when one considers what happened at the same time in academia.
The field of cognitive psychology, particularly the work of Daniel Kahneman, has been enormously influential in recent decades. The key insight, documented in his book Thinking, Fast and Slow, is that human thinking operates in two modes: one fast and automatic, one slow and deliberate. This two-system view has reshaped how people think about thinking. From it has come a tendency to treat the fast, automatic mode with suspicion. We call its outputs cognitive heuristics and cognitive biases and treat them as errors to be corrected by the slower, more careful mode.
This framing is useful but incomplete. It captures something real; automatic thinking does produce systematic errors, and being aware of those errors matters. But it misses something equally important.
Decades before Kahneman, the mathematician George Polya was studying a different aspect of heuristics. In How to Solve It and the works that followed, Polya proposed a formal study of what he called modern heuristics: the deliberate problem-solving and discovery strategies that experts deploy when approaching new problems. Work backwards from the solution. Find an analogous problem you’ve already solved. Draw a diagram. These aren’t automatic shortcuts; they’re tools you consciously select and apply. What we did earlier in this post was exactly that: we noticed the same ratio appearing in property, then in medicine, then in software, and asked whether the pattern held.
So Kahneman and Polya look like they’re talking about different things, but they’re not. They’re describing the same mechanism at different points of use. What Kahneman meant by “cognitive heuristic” is similar to the fast execution of a software program: bugs can surface during execution, and you can’t do much about them in the moment. But we are not helpless: when we find bugs, we fix the code. What Polya described with “modern heuristics” was how to amend the codebase.
What Kahneman called “cognitive heuristics” and what Polya called “modern heuristics” might be viewed as two sides of the same “mental model” coin.
We depend on automatic execution to function. You cannot live a conscious, deliberate, System 2 existence at all times; there isn’t enough attention in the world for that. The automatic patterns run constantly, handling the vast majority of what you do, freeing up your deliberate mind for the small fraction of situations that genuinely require it.
The question is whether what’s running automatically is well-formed or not. Whether the patterns that fire without your awareness are ones you’d endorse if you examined them. Whether the heuristics guiding your decisions in the background are reliable ones.
Mental models, properly understood, are the automatic heuristics we rely on. Whether they serve us well depends entirely on how deliberately we built them.
The Pareto Principle becomes a mental model when you’ve internalized it deeply enough that you reach for it without thinking, when you walk into a new situation and your pattern-recognition fires: *Where’s the 20% here?* That automaticity isn’t a failure. It’s the point. It’s what you were building toward when you did the deliberate work of noticing, naming, and placing the pattern.
The goal isn’t to eliminate fast thinking. It’s to improve the structure it relies on.
What This Means in Practice
Mental models caught on because people sensed something was missing. The information age delivered more content than anyone could process, and the intuition that a small number of patterns could cut through the noise was correct.
But, while pre-constructed lists are useful, what most mental model content misses is that the value isn’t just the list. It’s in the process of building it: Noticing recurrence, consolidating patterns, and placing them in a structure where they can do work. And it’s in understanding that the structure you build isn’t just an intellectual exercise. It’s the thing that runs automatically when you’re not thinking about it.
The real leverage of mental models isn’t knowing more of them. It’s knowing which ones are worth the deliberate work of cultivating and then doing that work until they run on their own.
WikiBonsai is tooling built for precisely this kind of process.