Dimensionalize. Antithesize. Metaphorize. These are cognitive tools in an abstract arsenal: directed reasoning that you can point at your problems.
They’re now available as a Claude Skills library. Download the Future Tokens skill library here, compress it to a .zip, and drag it into Claude → Settings → Skills (desktop). When you want Claude to run one, type “@dimensionalize” (or whichever skill you want) in the chat.
Language models should be good at abstraction. They are. The Future Tokens skills make that capability explicit and steerable.
In an LLM-judged test harness across dozens of skills, Future Tokens skill calls beat naïve prompts by roughly 0.2–0.4 (on a 0–1 scale) on insight, task alignment, reasoning visibility, and actionability, with similar factual accuracy. That’s roughly a 20–40 percentage-point bump on “reasoning quality” (as measured).
Abstraction means recognizing, simplifying, and reusing patterns. It’s a general problem-solving method. There’s no magic to it. Everyone abstracts every day, across all domains:
a database is an abstraction of events
a regression is an abstraction of data
a genre is an abstraction of media
“startup”, “capitalism”, “celebrity” — all abstractions
But most talk about abstraction stays at the noun level: “here is a concept that stands for a cluster of things.”
For thinking, abstraction needs verbs, the operations we perform when we generate and refine patterns:
How do you flip a worldview and see its mirror? Antithesize.
How do you surface the assumptions that undergird a claim? Excavate.
How do you find problems that sound like the one you have? Rhyme.
How do you map your problem into a domain with clearer structure? Metaphorize.
How do you identify the key attributes that describe your choices? Dimensionalize.
This is what the skills are, and why I have named them using verbed nouns. Not metaphysics: reusable procedures for problem solving.
These aren’t totally original ideas. They’re what good thinkers already do. My contribution here is:
Naming them precisely enough to call on demand,
Scaffolding them into workflows with parameters, and
Making them callable by LLMs with “@skill_name”.
Here’s the problem: abstraction is hard.
Humans are the only species we know of that abstracts deliberately. Our brains are built for it, yet we still spend decades training to do it well in even one domain.
We’re all constrained by some mix of attention, domain expertise, time, interest, and raw intelligence. I believe everyone abstracts less, and less clearly, than they would if they were unconstrained.
The failure modes are predictable:
under-abstracting: stuck in specifics, drowning in anecdotes, having the same thought on repeat without realizing it
mis-abstracting: finding incorrect or harmful patterns (stereotypes, spurious correlations, conspiracies)
over-abstracting: losing contact with concrete reality, building elaborate systems that never get used
The skills don’t fix everything. But:
They make under-abstraction easier to avoid (with an arsenal of abstractions you can apply in seconds), and
They add enough explicit structure that mis-abstractions are easier to see and fix.
You still need judgment. You still need priors. You just get more structured passes over the problem for the same amount of effort.
Future Tokens is my library of cognitive operations packaged as Claude Skills. Each skill is a small spec (sketched below) that says:
when to use it (and when not to)
how to run it (what Claude should output)
what good looks like
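Here’s a minimal sketch of what one of those specs could look like as a skill file, assuming the standard SKILL.md layout with name/description frontmatter. The wording and section structure are illustrative, not the published library:

```markdown
---
name: dimensionalize
description: Break a decision into its key axes and tradeoffs. Use when the user is weighing options that blur together.
---

## When to use it
The user faces a choice and can’t articulate what separates the options. Skip it for pure fact lookups or single-path tasks.

## How to run it
1. Restate the decision in one sentence.
2. Propose 3–6 axes that actually differentiate the options.
3. Score each option on each axis and name the dominant tradeoff.
4. End with the one axis the user should weigh most heavily, and why.

## What good looks like
Axes are independent, scores are defensible, and the tradeoff summary could change the user’s mind.
```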
The current public release includes 5 of my favorite operations. When to reach for them (a worked example follows the list):
I’m having trouble choosing → “@dimensionalize” (find the axes and tradeoffs)
I don’t agree but don’t know why → “@antithesize” (generate a coherent opposing force)
I don’t know why they believe that → “@excavate” (surface the assumptions underneath)
I’m confused about what problem I have → “@rhyme” (find nearby domains and similar problems)
This feels familiar but I don’t know how → “@metaphorize” (bridge domains and pull back concrete implications)
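To make that concrete, a dimensionalize call in chat looks roughly like this; the prompt and the output skeleton below are hypothetical, just to show the shape of what comes back:

```
You:    @dimensionalize Should we keep building our internal CRM or switch to an off-the-shelf one?

Claude: The decision in one sentence: build vs. buy for customer tooling.
        Axes that actually differentiate the options:
          1. migration cost   2. fit to your workflow   3. maintenance burden   4. vendor lock-in
        [per-option scores, the dominant tradeoff, and the axis to weigh most heavily follow]
```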
Over time, you stop thinking “wow, what a fancy skill” and start thinking “oh right, I should just dimensionalize this.”
Language models are trained on the accumulated output of human reasoning. That text is full of abstraction: patterns, compressions, analogies, causal narratives. Abstraction, in the form of pattern recognition and compression, is exactly what that training optimizes for.
Asking an LLM to dimensionalize or metaphorize isn’t asking it to do something foreign or novel. It’s asking it to do the thing it’s built for, with explicit direction instead of hoping it stumbles into the right move. So:
Dimensionalize is “surface the patterns of people who think in tradeoffs.”
Metaphorize is “do analogies on purpose, and don’t forget the ‘map back’ step.”
Excavate is “do causal reasoning, and then do causal reasoning on your causal reasoning.”
The interesting discovery is that these capabilities exist but are hidden: simple to access once named, but nontrivial to find. The operations are latent in the model¹.
Most of the “engineering” this work entails is actually just: define the operation precisely enough that the model can execute it consistently and you can tell when it has failed.
I’ve been testing these skills against baseline prompts across models. Short version: in my test harness, skill calls consistently outperform naïve prompting by about 0.2–0.4 (on a 0–1 scale) on dimensions like insight density, reasoning visibility, task alignment, and actionability, with essentially the same factual accuracy. Against strong “informed” prompts that try to mimic the operation without naming it, skills still score about 0.1 higher on those non-factual dimensions. The long version is in the footnotes².
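For the curious, the comparison loop itself is conceptually simple. Below is a minimal sketch of a pairwise, LLM-judged harness of the kind described above; the scoring dimensions match the write-up, but the prompt wording, JSON shape, and function names are my assumptions rather than the actual code:

```python
import json
import statistics
from typing import Callable

# Dimensions the judge scores, per the write-up (each on a 0-1 scale).
DIMENSIONS = ["insight", "task_alignment", "reasoning_visibility",
              "actionability", "factual_accuracy"]


def score_pair(judge: Callable[[str], str], task: str,
               naive_answer: str, skill_answer: str) -> dict:
    """Ask a judge model to score two answers on every dimension.

    `judge` is any callable that takes a prompt string and returns the
    model's text reply; wiring it to a real API client is left to the caller.
    """
    prompt = (
        "Score each answer from 0 to 1 on these dimensions: "
        + ", ".join(DIMENSIONS) + ".\n"
        + f"Task: {task}\n\nAnswer A:\n{naive_answer}\n\nAnswer B:\n{skill_answer}\n\n"
        + 'Reply with JSON only, shaped like {"A": {"insight": 0.7, ...}, "B": {...}}.'
    )
    return json.loads(judge(prompt))


def mean_deltas(results: list[dict]) -> dict:
    """Average score gap (skill answer minus naive answer) per dimension."""
    return {
        dim: statistics.mean(r["B"][dim] - r["A"][dim] for r in results)
        for dim in DIMENSIONS
    }
```

In a real run you’d also swap which answer appears first between passes, since LLM judges tend to favor whichever response they read first.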
The more interesting finding: most of the value comes from naming the operation clearly. Elaborate specifications help on more capable models but aren’t required. The concept does the work.
This is a strong update on an already favorable prior. Of course directing a pattern-completion engine toward specific patterns helps. The surprise would be if it didn’t.
I couldn’t find a compelling reason to gate it.
These operations are patterns that already exist in publicly available models because they are how good thinkers operate. I want anyone to be able to know and use what LLMs are capable of.
My actual personal upside looks like:
Being “the guy who systematized and named this”.
Stress-testing and improving the framework in the wild.
Getting pulled into more interesting conversations because people find it useful.
The upside of standardization is greater than the upside of rent-seeking. My goal isn’t to sell a zip file; it is to upgrade the conversational interface of the internet. I want to see what happens when the friction of “being smart” drops to zero.
So, it’s free. Use it, fork it, adapt it, ignore 90% of it. Most of all, enjoy it!
The current release is a subset of a larger taxonomy. Many more operations are in development, along with more systematic testing.
In the limit, this is all an experiment in compiled cognition: turning the better parts of our own thinking into external, callable objects so that future-us (and others) don’t have to reinvent them every time.
If you use these and find something interesting (or broken), I want to hear about it. The skills started as experiments and improve through use.
Download the library. Try “@antithesize” on the next essay you read. Let me know what happens!
