William Wear (@williamwear)


How to Think Like a Language Model (And Stay Human)

When you’re under pressure to write clearly—especially in technical contexts—it’s easy to grab the fastest phrasing, default to familiar patterns, or outsource your judgment to autocomplete.

But what if the best upgrade to your thinking wasn’t a new tool, but a new frame?

Ironically, the best inspiration might come from the very thing we fear will replace us: the large language model.

A Mental Model for LLM Thinking

LLMs like GPT-4 don’t think like humans, but they do follow a surprisingly teachable sequence. Here’s the model I’ve started using in my own technical writing:

  1. Tokenization – Break it into meaningful chunks

  2. Pattern Matching – Compare those chunks to known good structures

  3. Probabilistic Framing – Iterate phrasing until it converges on clarity

  4. Map Construction – Rebuild the pieces into a useful, traversable surface

This isn’t just what the model does. It’s a way I can think, too—especially when I want to write something that won’t mislead, overwhelm, or abandon a reader.
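To make step 1 concrete, here's a minimal sketch of what tokenization actually looks like, assuming you have OpenAI's tiktoken library installed (the sample sentence is borrowed from the case study below):

    import tiktoken

    # Load a real tokenizer (the encoding used by GPT-4-era models)
    enc = tiktoken.get_encoding("cl100k_base")

    sentence = "MAAS stores all state information in the PostgreSQL database."
    token_ids = enc.encode(sentence)

    # Show the chunks the model actually "sees"
    print(token_ids)
    print([enc.decode([t]) for t in token_ids])

The point isn't the library; it's the habit. Before you polish a sentence, break it into its meaningful chunks and ask whether each one is pulling its weight.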

A Real-World Case Study: MAAS Documentation

I used this model on a real paragraph from Canonical’s MAAS documentation. Here's the original:

“MAAS stores all state information in the PostgreSQL database. It is therefore recommended to run it in HA mode. Configuring HA for PostgreSQL is external to MAAS. You will, therefore, need to study the PostgreSQL documentation and implement the variant of HA that makes you feel most comfortable.”

It sounds cautious and helpful. But what it actually does is push the burden—and risk—onto the reader.

What the LLM Wrote

“To ensure high availability, MAAS relies on a PostgreSQL database configured in HA mode. MAAS does not manage PostgreSQL HA. It’s essential to implement an external solution, using tools like Patroni or Pacemaker. Relevant information is found in the official PostgreSQL HA documentation.”

It’s clearer. It suggests tools. But it’s a little sterile.

What I Wrote

“MAAS requires a highly available PostgreSQL database to function effectively in HA mode. While MAAS doesn't manage this directly, we recommend using tools like Patroni or Pacemaker to set up PostgreSQL HA. For a step-by-step guide, please refer to the PostgreSQL HA documentation.”

My version does the same work—but with warmth. It says, “You’re not alone. This is normal. Here’s a solid place to start.”

And the Surprising Part

Structurally, both versions are almost the same. Same flow. Same recommendations. Same clarity.

So what made them different?

The reasoning process behind each.

How LLMs Work vs. How Humans Think

Let’s talk about the deeper architecture behind those results.

LLMs: Pattern Without Purpose

LLMs don’t understand what they’re saying. They break input into tokens and predict what’s likely to come next based on massive statistical patterns.
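Here's a toy sketch of that prediction step. The vocabulary and scores are made up purely for illustration; a real model does this over tens of thousands of tokens:

    import math

    # Hypothetical raw scores (logits) for the token following
    # "Configuring HA for PostgreSQL is ..."
    logits = {"external": 2.1, "recommended": 1.4, "easy": 0.3, "purple": -3.0}

    # Softmax turns the scores into a probability distribution;
    # the model then picks (or samples) the next token from it.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        print(f"{tok:>12}  {p:.1%}")

That's the whole trick, repeated one token at a time. There is no model of the reader in there; only likelihood.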

They excel at:

  • Recognizing structure

  • Rewriting with coherence

  • Scaling across vast domains

But they lack:

  • Moral intuition

  • Stakes

  • Care

Humans: Meaning Before Mechanism

As Robert Sapolsky explains in Behave, our brains evolved in layers. We don’t just see words—we feel their implications. We model intent. We worry about consequences.

We simulate:

  • What if I misunderstand this?

  • How would I feel reading this under stress?

  • What happens if this fails in production?

We don’t just build sentences. We build trust.

Why This Matters

Thinking like an LLM gives you structure. Thinking like a human gives you responsibility.

When you combine both:

  • You anticipate ambiguity

  • You test clarity

  • You offer guidance, not just syntax

  • You stop writing sentences and start building maps

Practice This: Resources That Help

Mental Clarity & Deep Work

  • Deep Work – Cal Newport

  • Digital Minimalism – Cal Newport

  • Stolen Focus – Johann Hari

Better Thinking

  • Thinking in Systems – Donella Meadows

  • How to Take Smart Notes – Sönke Ahrens

  • Thinking, Fast and Slow – Daniel Kahneman

Communication

  • On Writing Well – William Zinsser

  • Information Architecture (4th Ed) – Rosenfeld, Morville, Arango

  • Docs for Developers – Bhatti et al.

Understanding LLMs

  • LLMs & Transformer Architecture – R. P. Menon

  • LLM Transformer Explained (Visual) – Poloclub

  • Understanding Transformers

Final Thought

Your users aren’t dumb. But they are overwhelmed.

They don’t need you to sound smart. They need to feel safe enough to move forward.

So yes—think like a language model.

But stay human.