Absolute Beginners++
Everyone’s Teaching You About AI. Nobody’s Teaching You How to Think.
I spent a year watching people make decisions about AI. Not researchers or engineers — the people one layer out. Product managers evaluating vendor tools. Consultants advising clients on adoption. Founders deciding where AI fits in their stack. Team leads trying to figure out whether a new capability means they should restructure their workflow or ride out the hype cycle.
The pattern was remarkably consistent. Someone encounters an AI decision — should we adopt this tool, how should we evaluate it, why isn’t our deployment working, what do we do now that the tool we picked six months ago is outperformed by something released last Tuesday. And instead of slowing down to figure out what they’re actually deciding, they react. The experienced people pattern-match against previous technology cycles. The enthusiastic people expand scope before they’ve defined scope. The cautious people wait for someone else to go first. Reaction, reaction, paralysis.
What almost nobody does is orient — pause to understand the problem before trying to solve it.
This isn’t a criticism. It’s the normal human condition. The same thing happens when a couple browses apartment listings for two weeks without realizing they’re optimizing for different futures, or when a team runs a quarterly planning process that produces a plan nobody follows. People act before they understand, not because they’re lazy, but because understanding feels like inaction and action feels like progress. The gap between reacting and thinking is invisible from the inside.
I wanted to close that gap. So I started looking for methods — and found something I wasn’t expecting.
I went looking for one framework and found five. Then I realized they were all the same framework.
George Pólya’s How to Solve It (1945) gave mathematics a four-step problem-solving heuristic: understand the problem, devise a plan, carry out the plan, look back. It’s endured for eighty years because Pólya wasn’t really describing a method for math. He was describing a method for thinking about unfamiliar problems.
John Boyd’s OODA loop — Observe, Orient, Decide, Act — was built for fighter pilots making decisions under lethal time pressure. Boyd’s central insight wasn’t the loop; it was that orientation does the real cognitive work. Everything downstream is only as good as your model of the situation.
The U.S. Army’s crawl-walk-run is a training doctrine applied to every capability from rifle marksmanship to combined arms operations. You don’t practice the hard version until you’ve verified the easy version. You don’t combine skills until each one is solid on its own.
Benjamin Bloom’s mastery learning research demonstrated that students who achieve genuine understanding before advancing outperform control groups by two standard deviations — equivalent to moving an average student to the 98th percentile. The mechanism is simple: verified foundations support load. Unverified ones don’t.
James Paul Gee’s learning principles from video games explain why games produce deep learning so efficiently. They drop you in over your head, give immediate feedback, let you fail cheaply, and make identity formation part of the process. You don’t just learn a game’s mechanics — you become someone who thinks in them.
Five frameworks. Five unrelated domains. Eight decades of independent development. No shared vocabulary, no shared methods, no shared assumptions about how the world works.
All five describe the same three-layer structure: orient before you execute, execute in small, complete, verifiable loops, reflect to ratchet understanding forward.
Pólya — Mathematics, 1945
Boyd — Military strategy, 1961
U.S. Army — Training doctrine, 1970s
Bloom — Education research, 1984
Gee — Game design / literacy, 2003

Orient — Understand before you act
Execute — Small, complete, verifiable loops
Reflect — Ratchet understanding forward

Five frameworks, five domains, eight decades — same three-layer structure.
I didn’t expect the convergence. I was reading Pólya alongside Boyd because both were relevant to a problem I was working on, and the structural mapping was so clean it stopped me. I pulled in the Army doctrine because I’d trained under it — same skeleton. Then Bloom, which I knew from a different context. Then Gee, who shouldn’t map at all (video games have nothing obvious in common with military training) — and maps perfectly. At some point the question flipped from “can I synthesize these?” to “why hasn’t anyone noticed these are isomorphic?”
The answer, I think, is domain walls. Pólya lives in mathematics education. Boyd lives in military strategy and business. Bloom lives in educational psychology. Gee lives in literacy studies and game design. The Army’s doctrine lives in field manuals that academics don’t read. Each framework is well-known inside its discipline and nearly invisible outside it. The convergence is only visible if you happen to be reading across all five — which almost nobody does, because there’s no obvious reason to.
The convergence matters because of what it reveals about this particular moment. In stable domains, expertise is a massive advantage — your pattern library is calibrated, your intuitions track reality, your experience produces reliable shortcuts. But when the territory shifts faster than maps can update, expertise develops a structural liability: confident pattern-matching against a landscape that no longer exists. Not a character flaw. A feature of how expertise works. You get good at recognizing situations. Then the situations change.
In AI, this has been visible for years. When large language models began demonstrating unexpected capabilities, the people most consistently wrong about what was possible were domain experts in NLP — not because they were less intelligent, but because they had strong priors trained on a paradigm that had just ended. The people most consistently right were often outsiders who tried the thing and observed what happened, unburdened by a model telling them it shouldn’t work.
This isn’t an argument for ignorance over knowledge. The argument is narrower: in a fast-moving domain, the absence of stale priors is a genuine structural advantage — but only if you have a method for making sense of what you’re seeing. Beginner’s mind without method is confusion. Beginner’s mind with method is openness plus traction.
The methodology is one thing. Turning it into something people actually use is a different problem — a product problem. Research that stays in synthesis form doesn’t change behavior. I needed a delivery mechanism that would survive contact with how people actually learn.
The key design decision was what I call wrong-first pedagogy.
Natural mistake — A smart, motivated person follows the default behavior
Recognition — The reader watches the mistake from inside, and recognizes it
Corrective move — The method earns its way in by resolving the failure the reader felt
Identity shift — The learner doesn't use the method; they think from inside it
Wrong-first works because recognition is a stronger learning signal than instruction. When you watch someone make the mistake you’ve made — from inside their reasoning, where it feels justified — and then see what changes when they orient first, the method earns its way in. You don’t learn it as a rule imposed from outside. You learn it as the thing that would have prevented your own failure. That’s Gee’s identity principle in action: the method becomes part of how you think, not something you perform.
The alternative — direct instruction, “here are the five steps, now apply them” — is how most methodology books work, and it’s why most methodology books don’t change behavior. You read them, you nod, you agree, you never use them. The spectator failure mode. I’ve watched it happen with people who were smarter and more experienced than the problem required. The fix isn’t better frameworks. It’s a better sequence: encounter the failure, feel its logic, then learn the move that resolves it.
What I keep returning to — and what I can’t yet prove — is whether this kind of structured methodology changes how someone approaches problems they haven’t encountered yet. You can demonstrate a method. You can scaffold it with worksheets and reference cards. But the real test is whether someone who has practiced Orient > Execute > Reflect reaches for it on a novel problem, unprompted, without the book open.
That’s an identity shift, not skill acquisition. The deepest learning doesn’t produce someone who uses a method — it produces someone who thinks from inside one. The contemplative traditions have always understood this. So has every serious training culture. The question is whether a book-length intervention can catalyze it, or whether that kind of shift requires something more — practice environments, feedback loops, a community, a tool that meets you in the moment of decision.
I think it probably requires more. The book is the first layer. What the method actually wants to be is an environment — something that can orient alongside you in real time, surface the wrong-first you’re about to make, scaffold the move you need without removing the cognitive work that makes it stick. That’s an AI product, not a PDF. It’s the version of this project I haven’t built yet, and the version I think matters most.