Mental Models for AI

January 2026


Most commentary on artificial intelligence falls into one of two camps. The first is panic: AI will take all jobs, concentrate power among a few corporations, and leave humanity obsolete. The second is hype: AI will solve climate change, cure cancer, and usher in an era of unprecedented abundance. Both camps share a habit of mapping AI onto familiar scripts from previous technological revolutions—the printing press, electricity, the internet—and then extrapolating wildly.


In this piece, I'd like to offer something different. Not predictions, but a framework. A way to think clearly about what AI actually is, who stands to benefit, and how to position oneself accordingly.


AI does differ from past technological shifts in one important way. The printing press, the steam engine, and the electrical grid all augmented human physical capabilities or extended the reach of human communication. AI augments cognition itself. And unlike engines that operate within well-understood thermodynamic limits, AI systems exhibit emergent behaviors that even their creators cannot fully anticipate. On top of that, advanced AI can potentially improve its own architecture, creating a feedback loop where the technology accelerates its own development. This isn't the industrial revolution with better marketing; it's likely something genuinely new, which means we need new mental models to navigate it.


The Three Properties of AI


In my view, AI has three key properties that are worth understanding: it is (i) alien, (ii) encyclopedic, and (iii) generative. Each property has implications for how we work, what skills matter, and where value accumulates. Understanding these properties won't tell you exactly what happens next, but it will help you think more clearly than most.


Alien


AI is alien in ways we tend to underestimate. It never fatigues. It bears no legal liability. And in many people it triggers a deep aversion: we instinctively distrust something that mimics human intelligence without being human.


These facts have consequences. We should expect backlash against over-reliance on AI, some of it unfair. When people feel displaced or unsettled, they push back, and AI makes an easy target. This means that communication, emotional intelligence, and what you might call personal "aura" become genuine differentiators. In a world where the marginal cost of generating competent text or analysis nears zero, people who can frame ideas will stand out.


Encyclopedic


AI has been trained on the entire corpus of human knowledge. Also on the entire corpus of human error, but that's a separate problem. The point is that anything that has been written down, AI has probably seen.


This makes knowledge itself less scarce. If you can look up anything instantly, the premium shifts away from knowing things and toward framing things. Narrative construction matters more. Interpretive skill matters more. The researcher who can synthesize findings into a coherent story, who can explain why something matters, gains an edge over the one who simply accumulates findings.


Trust replaces expertise as the scarce resource (which is why I don't think AI will replace PhDs anytime soon). When anyone can sound knowledgeable, people need shortcuts to decide who to believe. Credentials help, but so do track records, institutional affiliations, and personal reputation. Building trust takes time; it compounds slowly. That makes it valuable precisely because AI can't shortcut it.


Generative


AI is very good at extracting patterns from its training data and reproducing them with high fidelity. This is what makes it useful: you can get a competent first draft, a working prototype, a rough design, in minutes rather than hours or days.


This rewards people who are rich in ideas and breadth. If execution becomes cheap, ideation becomes the constraint. Generalists and pseudo-polymaths—people who work across domains without deep expertise in any single one—find their stock rising. AI fills the knowledge gaps, letting integrative thinkers operate in territories they couldn't access before.


The generative property also advantages people who document their thoughts. If you've been taking notes for years, capturing observations, saving snippets of ideas, you've been building the perfect corpus for AI augmentation without knowing it. Your notes become raw material that AI can synthesize, cross-reference, and extend.


Who Benefits from AI?


Given these properties, who should expect to gain? The answer isn't simply "tech workers" or "people who learn to code." The traits that matter are less obvious.


Introverts. AI interfaces require none of the social energy that human interactions demand. The person who dreads networking events but possesses deep knowledge can now leverage that knowledge through tools that don't care about social performance. Ideas can be packaged and scaled without the traditional overhead of in-person engagement.


People with abundant ideas. Historically, each idea required significant time investment to realize. You might have ten concepts but only the bandwidth to pursue one. As AI compresses the distance between ideation and creation, rapid prototyping across writing, design, code, and music becomes possible.


Generalists and pseudo-polymaths. Those who work across multiple domains but lack deep expertise in any single area find themselves newly empowered. AI lets them simulate competence in specialized tasks, freeing them to focus on what they're actually good at: pattern recognition across domains, connecting disparate ideas, and integrating perspectives.


Those closest to end consumers. As AI compresses production costs in the middle layers of value chains, the premium shifts toward people who understand human needs and who can contextualize AI outputs for specific audiences. Teachers who integrate AI-generated materials thoughtfully. Designers who direct AI tools toward genuine human preferences. Businesses with direct customer relationships.


The organized. People who take notes diligently, document their thinking, and maintain systems for capturing observations. When you feed years of accumulated notes to an AI, patterns emerge that memory alone would never surface. A casual observation from January connects to a trend noticed in March. The organized have an unfair advantage, and they earned it before they knew it would pay off.


How to Make Yourself AI-Proof


Knowing who benefits is useful, but how do you actually position yourself so AI complements rather than replaces you?


Build a brand. In a world of infinite content, people need a reason to choose you. Brand is the shortcut for trust; it answers the question of why anyone should allocate their scarce attention here rather than anywhere else. When AI can produce competent work in any style, your name becomes the signal that cuts through the slop.


Be someone people want in the room. Here's a question worth asking: do people want you there, or do they just need the task done? If it's the latter, you're competing with AI. If it's the former, you're not. Presence matters, and warmth matters: the experience of being with a human who listens, who responds in real time, who brings energy or calm or humor to a room. Give people a reason to hang out with you rather than with a chatbot.


Occupy roles that require liability. AI cannot assume liability (for now). When a doctor misdiagnoses or an engineer miscalculates, our legal and social structures provide recourse. Someone can be sued, fired, held accountable. In its current form, however, AI exists outside this infrastructure entirely, which means roles that require someone to be legally on the hook aren't going anywhere soon. The signature on the document, the license on the wall, the person who answers when something goes wrong—these will remain human for the foreseeable future.


Do what AI cannot. AI cannot replicate genuine human touch. The hand-pulled noodles whose slight irregularities tell the story of an artisan's decades of practice. The shared electricity of live sports, where thousands breathe and react together. The nurse whose presence provides comfort beyond medical necessity. These experiences are valuable precisely because humans create them. And the category is probably broader than you might think.


How to Use AI Well


Positioning yourself is one thing. Actually using these tools effectively is another.


Delegate verifiable tasks. AI is imperfect, much like humans. It makes mistakes, misunderstands context, and sometimes confidently produces nonsense. This means you should delegate tasks where you can easily verify the output. Writing works well: you can read what AI produces and judge whether it's good. Design works: you can see the result. Building prototypes works: you can test them. Songs, images, code—all verifiable. What doesn't work as well are tasks where you lack the expertise to evaluate the output, or where errors are subtle and consequential.
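
To make "verifiable" concrete, here's a minimal sketch in Python. The slugify function is a hypothetical stand-in for something an AI drafted for you; the tests are yours, and they're what turns untrusted output into trusted output.

    def ai_generated_slugify(title: str) -> str:
        # Hypothetical: pretend an AI drafted this implementation for you.
        return "-".join(title.lower().split())

    # Your own test cases are the verification layer.
    assert ai_generated_slugify("Hello World") == "hello-world"
    assert ai_generated_slugify("  Mental   Models  ") == "mental-models"
    print("Verified: the draft matches my expectations.")

The asymmetry is the whole point: writing the checks takes minutes, while writing the implementation might have taken you an afternoon.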


The joint hypothesis problem. When AI produces bad output, you face a problem familiar to researchers: you can't tell if the model failed or if you prompted poorly. Maybe the AI isn't capable of what you're asking. Or maybe you asked badly, and a better prompt would have worked. Two implications follow. First, don't dismiss AI after a few bad experiences, as it might have been you. Second, learning to prompt well is a genuine skill, not a trivial one. The people who get the most from AI aren't those with access to better models; they're those who've learned to communicate with the models clearly.


Avoid local minima. A common workflow with AI is to generate a first draft, then polish it. But here's the trap: when you start with something 90% done, you work within the boundaries it sets. You edit sentences rather than questioning whether these are the right paragraphs. The AI's first attempt becomes gravity, pulling all your subsequent thinking into its orbit. The good news is that the escape is simple: exploration is now basically free. Ask for ten wildly different approaches before committing to one. A local minimum only traps you if you stop exploring at the first one you find.


Build your corpus. If you haven't been taking notes, start now. Every memo, every captured thought, every documented observation becomes raw material for AI to work with. The habit compounds: the more you capture, the more AI has to synthesize, and the more connections it can surface that you'd never find manually. This is much simpler than the fuss about building a second brain or any other productivity fetish. AI just needs input, and your accumulated notes are the best input you can give it, because they're yours.
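
As a minimal sketch of what "feeding your notes to an AI" can look like, assuming your notes are plain Markdown files in a folder. The ask_llm function is a hypothetical placeholder, not a real API; swap in whatever model you actually use.

    from pathlib import Path

    def build_corpus(notes_dir: str) -> str:
        # Concatenate every Markdown note into one blob, in filename order.
        chunks = []
        for path in sorted(Path(notes_dir).glob("**/*.md")):
            chunks.append(f"## {path.name}\n{path.read_text(encoding='utf-8')}")
        return "\n\n".join(chunks)

    def ask_llm(prompt: str) -> str:
        # Hypothetical placeholder: wire up whatever model or API you use.
        raise NotImplementedError

    corpus = build_corpus("notes/")
    prompt = ("Here are my accumulated notes:\n\n" + corpus +
              "\n\nSurface connections between observations I may have missed.")
    # answer = ask_llm(prompt)

The point isn't the code; it's that the plain files you already have are a perfectly good corpus. No special system required.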


Coda


This framework isn't predictive in any strong sense. No one knows where AI goes from here: not the researchers building it, not the executives deploying it, not the pundits opining about it. The technology is too new and too strange for confident forecasts.


But you don't need predictions to prepare. You need ways to think clearly amid noise, to position yourself for what's likely rather than what's loudest. The three properties—alien, encyclopedic, generative—give you that foundation. The rest follows: who benefits, what to protect, and how to work effectively.


The goal isn't to beat AI. You won't. The goal is to become the kind of person AI makes more valuable, not less.


One final point: knowing all this is step one. Actually doing it is harder; inertia is real. You won't wake up tomorrow with a note-taking habit, a personal brand, and a prompt engineering practice. These things require the same slow accumulation as any other skill: regular practice, not occasional experimentation. The people who benefit from the AI era won't be those who read frameworks like this one. They'll be those who read it and then, over weeks and months, actually change how they work. The framework is free. The habit is the hard part.