Productivity metrics are up. Cognitive exhaustion is up faster. These two facts are related.
There’s a version of the AI productivity story that sounds completely reasonable: before AI, knowledge workers spent a significant portion of their time on low-value-added tasks. Boilerplate code, repetitive documentation, formatting, routine searches, template emails, mechanical first drafts. AI handles most of that now. What remains is the interesting part: judgment, design, decision-making, creative problem-solving. Pure signal, no noise.
This story is told by people who have never seriously thought about how human cognitive systems actually work.
The boring work wasn’t just low-value output. It was low-demand input. And low-demand input, distributed across a workday, is not inefficiency. It is the recovery mechanism that makes sustained high-intensity cognitive work possible.
Remove it, and you don’t get a more productive human. You get a human running at maximum intensity with no periodization. Every athlete knows what happens next.
The attention economy nobody is tracking
In 1989, Rachel and Stephen Kaplan introduced what became known as Attention Restoration Theory, which Stephen Kaplan later formalized as an integrative framework ¹. The core insight was that directed attention, the kind required for focused, deliberate cognitive work, is a depletable resource. It fatigues. And it restores not through passive rest alone, but through a specific type of low-demand engagement they called “soft fascination”: activities that hold attention gently without requiring deliberate effort. Walking in a park. Looking out a window. Doing something familiar and repetitive.
The theory was developed to explain why natural environments reduce mental fatigue. But the mechanism it describes applies directly to the structure of cognitive work. The mind does not switch cleanly between “on” and “off.” It moves through cycles of intensity and restoration. Activities that sit in the middle of that spectrum, engaging enough to hold attention, simple enough not to tax it, are not wasted time in a cognitive workload model. They are the valleys between peaks. They make the peaks possible.
In a pre-AI workday, those valleys were everywhere. Writing a routine status update. Reformatting a document. Filling in the repetitive parts of a familiar code structure. Running a query you’ve run a hundred times. None of it required deep thought. All of it occupied enough attention to keep the mind gently engaged rather than idle. And in doing so, it allowed directed attention to partially restore between the moments it was actually needed.
AI has systematically eliminated this layer. What remains is almost entirely peak-demand work: architectural decisions, complex debugging, ambiguous problem framing, stakeholder communication under uncertainty, judgment calls with incomplete information. The cognitive intensity per hour has gone up. The recovery built into the structure of the day has gone down. And the people working this way are being evaluated on the productivity metrics that went up, not on the depletion metrics that nobody is measuring.
Ego depletion and the decision quality curve
Roy Baumeister’s research on self-regulation introduced a concept that generated significant debate but has accumulated substantial empirical support in applied contexts: the idea that the capacity for deliberate, effortful thinking is not unlimited within a given period ². Judges give harsher sentences before lunch. Doctors order more unnecessary tests late in the afternoon. The quality of complex decision-making degrades as the session progresses, not because people become less intelligent, but because the resource that effortful thinking draws from is not infinitely renewable within the span of a workday.
The practical implication for AI-augmented work is straightforward and uncomfortable: if the low-demand tasks that used to punctuate the day have been removed, and what remains is an unbroken sequence of high-demand cognitive work, the degradation curve starts earlier and drops further.
The first hour of architectural design in an AI-augmented environment is probably sharper than it ever was before: no friction, no mechanical overhead, pure focused engagement with the real problem. The fourth hour of that same session, with no soft fascination in between, no cognitive valleys to restore from, is probably worse than it would have been in a pre-AI workday that included some mechanical breaks.
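The contrast can be sketched as a toy model. Everything here is invented for illustration: the hourly drain and restore rates, and the "H"/"L" schedule strings standing in for high-demand and low-demand hours. The point is the shape of the curves, not the numbers.

```python
# Toy model of directed-attention depletion across a workday.
# Rates are illustrative assumptions, not empirical values.

def run_day(schedule, drain=0.15, restore=0.08):
    """Simulate an attention resource over a sequence of hourly tasks.

    schedule: string of 'H' (high-demand) and 'L' (low-demand) hours.
    Returns the attention level remaining after each hour.
    """
    level = 1.0
    trace = []
    for task in schedule:
        if task == "H":
            level = max(0.0, level - drain)    # directed attention depletes
        else:
            level = min(1.0, level + restore)  # soft fascination partially restores
        trace.append(round(level, 2))
    return trace

pre_ai = run_day("HLHLHLHL")  # intense work punctuated by routine tasks
ai_era = run_day("HHHHHHHH")  # the same day with the valleys removed

print("pre-AI:", pre_ai)
print("AI era:", ai_era)
```

Under these assumptions, the pre-AI schedule sawtooths downward but never approaches zero, while the unbroken schedule depletes monotonically and hits the floor before the day ends. The first hour looks identical in both runs; the difference only appears late in the day, which is exactly where nobody is measuring.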
The productivity measurement captures the first hour. It does not capture the fourth.
A different kind of exhaustion
Traditional burnout has a recognizable profile. It develops through prolonged exposure to high-demand, low-control situations. The exhaustion is emotional and motivational as much as cognitive: the person loses the capacity to care, not just the capacity to think.
What is emerging in AI-augmented knowledge work is something with a different texture. The work is often genuinely interesting. The autonomy is real. The control is, in many respects, higher than before. The motivational signals are not obviously depleted. But the cognitive exhaustion arrives faster and runs deeper, because the intensity is sustained without the natural breaks that used to be embedded in the workflow.
The German occupational psychologist Winfried Hacker described this distinction decades ago in his work on mental workload ³. He differentiated between tasks that generate fatigue through emotional depletion and tasks that generate fatigue through sustained cognitive effort without adequate variation. The second category doesn’t produce classic burnout symptoms immediately. It produces something more insidious: a gradual narrowing of cognitive flexibility, increasing rigidity in problem-solving, reduced capacity for the kind of lateral thinking that complex work actually requires, while the person’s subjective sense of their own performance remains largely intact.
They feel fine. They are working. The quality of their judgment is quietly eroding.
This is not a hypothetical. It is a documented pattern in professions that have undergone rapid automation of lower-demand tasks: air traffic control, radiological diagnosis, financial analysis. In each case, the removal of routine work increased the average cognitive intensity of what remained. In each case, fatigue-related performance degradation appeared in contexts and at timescales that the previous workload models had not predicted.
The meeting problem
There is a specific manifestation of this dynamic that deserves its own attention.
In a pre-AI organization, the day had a natural rhythm of cognitive demand. High-intensity work alternated with lower-intensity work, not because anyone designed it that way, but because the nature of the tasks required it. Writing code was intense. Running the build was not. Thinking through an architecture was intense. Writing the boilerplate was not.
In an AI-augmented organization, the lower-intensity work is handled by AI. The human’s calendar fills with what’s left: complex decisions, difficult conversations, ambiguous problems, strategic alignment. These are, almost without exception, high-demand cognitive activities. They stack. The day becomes a sequence of situations that each require the kind of sustained directed attention that Kaplan’s research identified as the most rapidly depleting cognitive mode.
The result is not just fatigue. It’s a specific type of cognitive congestion where each subsequent high-demand task is being approached with a directed attention resource that has not had the opportunity to restore. The meeting about architectural strategy at 4pm is being handled by a mind that has been running at near-maximum intensity since 9am with very few genuine breaks, because the tasks that used to provide those breaks no longer exist.
In retrospective surveys, this registers as: “I feel like I’m in meetings all day.” The meetings haven’t increased. The cognitive weight per hour of non-meeting time has increased so much that even the same number of meetings now feels like saturation.
What elite sports figured out that knowledge work hasn’t
Periodization is one of the most well-established principles in athletic training. The core idea is simple: adaptation and performance improvement don’t happen during high-intensity work. They happen during recovery from high-intensity work. A training program that runs at maximum intensity every session does not produce a stronger athlete. It produces an injured one.
The same principle operates in cognitive performance. Research on expert musicians by Ericsson, whose work appeared in the previous piece in this series, found that the best performers didn’t just practice more. They structured their practice with explicit recovery built in: hard sessions followed by easier ones, deliberate work alternating with periods of low-demand activity that allowed consolidation and restoration ⁴.
The knowledge work version of this principle has always operated informally and largely unconsciously. Nobody scheduled “cognitive recovery time.” It happened because the nature of the work included tasks of varying demand, and the lower-demand tasks functioned as recovery whether anyone recognized them as such or not.
AI has removed that informal structure without replacing it with anything. The equivalent would be an athletic coach who, upon discovering that warmup and cooldown periods weren’t producing direct performance gains, eliminated them to increase training efficiency. The performance metrics would improve briefly. The injury rate would follow shortly after.
The industry is in the brief improvement phase. The injury rate is starting to show up in retention numbers and in the quiet proliferation of “I’m exhausted but I don’t know why” conversations that are becoming common in engineering teams.
The measurement problem
Everything described above is real, observable, and in most organizations, completely unmeasured.
What gets measured: output per hour, feature velocity, time to completion, lines of code reviewed, tickets closed. All of these go up with AI assistance. All of these are captured in the productivity narrative that justifies continued AI adoption and, increasingly, reduced headcount.
What doesn’t get measured: cognitive intensity per hour, sustained attention depletion curves, decision quality degradation over the course of a day, the degree to which the work remaining after AI assistance represents a qualitatively different cognitive load profile than the work it replaced.
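The gap between the two lists can be made concrete with a hypothetical dashboard calculation, reusing the same invented rates as before and adding one further assumption: that the quality of each high-demand decision tracks the attention remaining when it is made.

```python
# Hypothetical dashboard vs. the unmeasured depletion behind it.
# Drain/restore rates and the quality-tracks-attention assumption
# are illustrative, not empirical.

def day_metrics(schedule, drain=0.15, restore=0.08):
    """Return (tasks_completed, avg_decision_quality) for a day.

    schedule: string of 'H' (high-demand) and 'L' (low-demand) hours.
    Only 'H' hours count as completed tasks; each is scored by the
    attention level remaining when it was done.
    """
    level, tasks_done, quality_sum = 1.0, 0, 0.0
    for task in schedule:
        if task == "H":
            level = max(0.0, level - drain)
            tasks_done += 1
            quality_sum += level  # decision quality shadows remaining attention
        else:
            level = min(1.0, level + restore)
    return tasks_done, round(quality_sum / tasks_done, 2)

pre_tasks, pre_quality = day_metrics("HLHLHLHL")
ai_tasks, ai_quality = day_metrics("HHHHHHHH")

print("pre-AI:", pre_tasks, "tasks, avg quality", pre_quality)
print("AI era:", ai_tasks, "tasks, avg quality", ai_quality)
```

The dashboard column (tasks completed) doubles; the invisible column (average decision quality) falls. A measurement system that only records the first number will read this day as an unambiguous improvement.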
The mismatch between what’s measured and what’s happening is not accidental. It reflects a broader pattern in how organizations think about knowledge work: inputs and outputs are legible, the internal state of the person doing the work is not. This has always been true. AI has made it more consequential, because it has changed the internal state in ways that the existing measurement systems are structurally unable to detect.
The senior engineer who is producing more output than two years ago, working on genuinely interesting problems with more autonomy, and quietly running out of cognitive capacity by Wednesday afternoon: they don’t show up as a problem in any dashboard. They show up, eventually, as a resignation letter.
The honest version
The productivity gains from AI in knowledge work are real. This piece is not arguing otherwise.
What it is arguing is that those gains are being measured in a framework that cannot see their costs. The cost is not in output. It’s in the cognitive load profile of the work that remains: denser, more sustained, less varied, stripped of the low-demand texture that used to function as distributed recovery.
The people doing that work are not producing less. They are depleting faster. And because the depletion is cognitive rather than physical, because it doesn’t feel like exhaustion in the way a hard physical day does, because the work is interesting and the autonomy is real, it goes unrecognized until it has progressed far enough to become a retention problem or a decision quality problem or both.
Organizations that are serious about this will start designing cognitive load variation back into the structure of the workday, not as a wellness initiative, but as a performance architecture decision. Deliberate variation in task intensity. Protected time for lower-demand work. Explicit attention to the rhythm of the day and not just the output of the day.
That requires treating the human doing the work as a system with recovery requirements, not as a throughput variable with AI as a performance multiplier.
The ones that figure this out first will have a significant and largely invisible competitive advantage: their people will still be thinking clearly on Friday afternoon.
References
¹ Kaplan, S. (1995). The Restorative Benefits of Nature: Toward an Integrative Framework. *Journal of Environmental Psychology*, 15(3), 169–182.
² Baumeister, R. F., Bratslavsky, E., Muraven, M., & Tice, D. M. (1998). Ego Depletion: Is the Active Self a Limited Resource? *Journal of Personality and Social Psychology*, 74(5), 1252–1265.
³ Hacker, W. (1998). *Allgemeine Arbeitspsychologie: Psychische Regulation von Arbeitstätigkeiten*. Huber.
⁴ Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The Role of Deliberate Practice in the Acquisition of Expert Performance. *Psychological Review*, 100(3), 363–406.