Deep Work vs. the Cyborg Hyperactive Cracked-Out Agent Allocator


Cal Newport has been focused on focus for, as far as I can tell, his entire professional life. He is the most notable advocate for deep work - “Professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit.” He’s written three books - So Good They Can’t Ignore You, Digital Minimalism, and Deep Work - which all (to different degrees) make the case for strategically choosing to become great in your chosen field by limiting distraction and focusing deeply, as a craftsman might.

In the books he gives examples from software engineering, research, writing, and other fields where success—both recognition and economics—is disproportionately accrued by the deeply focused. For instance, the difference between a mediocre software engineer and a great engineer is huge, with returns splitting bimodally between the best and the rest.

If you’re not comfortable going deep for extended periods of time, it’ll be difficult to get your performance to the peak levels of quality and quantity increasingly necessary to thrive professionally.

Deep Work lays out this core argument right at the start:

The Deep Work Hypothesis: The ability to perform deep work is becoming increasingly rare at exactly the same time it is becoming increasingly valuable in our economy. As a consequence, the few who cultivate this skill, and then make it the core of their working life, will thrive.

I think of him - and I truly mean this as a compliment - as a spiritual leader, creating philosophies that people can build their lives around for success and fulfillment in the modern economy. I’ve found inspiration in his ideas and returned to them many times (even as I often fail to live up to their commandments).

However, the gods tend to be fickle, the gods of the marketplace in particular, and one decade’s monastic practice for greatness can quickly become next decade’s competitive disadvantage. The ability to perform deep work is becoming less rare and less valuable at exactly the same time—because the agents can do it for you.

In a recent essay, Steve Newman profiles an emerging work practice of constant, frenzied management of swarms of AI agents to do what might once have been focused work:

Afra Wang [recounts] Liu Xiaopai, a Chinese programmer who is using AI tools to crank out product after product. Working mostly on his own, he currently has “one or two dozen revenue-generating products” and reports clearing over $1,000,000/year in profit. By contrast, a typical startup requires many people to build and maintain a single product.

From a linked article by Sam Shillace, who manages the Amplifier team, which builds skills that AI agents can use:

All of these teams are overwhelmed with ideas now - that’s a common hallmark, because they are so productive that the new bottleneck is human attention. It’s common to have 5-10 processes running in parallel, and API spend is routinely hundreds of dollars a day (one team has a goal of getting to a thousand).

Newman describes this philosophy of ‘hyperproductivity’ as having two tenets:

A hyperproductive individual does not do their job; they delegate that to AI. They spend their time optimizing the AI to do their job better.

A hyperproductive individual may also spend time deciding what the AI should do, but that represents a failure to fully delegate.

Reportedly, these individuals and small teams are using multiple agents to scale their work output instead of going deep themselves.

The human role is to be a simultaneous manager, tutor, and genetic engineer for a squad of tireless, but sometimes clueless, agents. Each agent needs to be kept busy with tasks, and those tasks need to be coordinated so as to prevent one agent from interfering with another’s work. At the same time, the hyperproductive worker is constantly evaluating their every move (and every move taken by their agents) to see whether it could be done more efficiently.

So the work becomes ensuring the AI agents are always on task and never distracted, even if the human is in a constant state of moderate distraction managing all of it.
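For concreteness, here is a minimal, purely illustrative sketch of that allocator loop in Python. The run_agent coroutine is a hypothetical stand-in for whatever actually drives an agent session (none of the names below come from a real SDK); the point is only that the human’s position shifts from doing the task to dispatching tasks and reviewing whatever comes back.

```python
import asyncio

# Purely illustrative sketch of the "agent allocator" pattern described above.
# run_agent() is a hypothetical stand-in for whatever SDK or CLI call actually
# drives an agent session; it does not reflect any specific product's API.

async def run_agent(name: str, task: str) -> str:
    """Hand a task to an agent and wait for its result."""
    await asyncio.sleep(1)  # stand-in for a long-running agent session
    return f"{name} finished: {task}"

async def allocate(assignments: dict[str, str]) -> None:
    # Launch every agent in parallel; the human's job collapses into reviewing
    # whatever comes back, in whatever order it arrives.
    running = [asyncio.create_task(run_agent(n, t)) for n, t in assignments.items()]
    for finished in asyncio.as_completed(running):
        print(await finished)  # the review/approve/redirect step would go here

asyncio.run(allocate({
    "agent-1": "refactor the billing module",
    "agent-2": "draft release notes",
    "agent-3": "triage open issues",
}))
```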

Another example: a tweet I saved of Steve Yegge’s slightly terrifying dashboard of coding agents:

Please note that Steve Yegge is not some new grad who has never coded before and is excited that he got a Todo app running - besides the million+ lines of code he’s written, and his senior roles at Google and Amazon, he also has some great rants about the industry.

Some of these reports might be hype, but I personally saw researchers and engineers in a research fellowship I was running execute crazy multi-agent workflows to make impressive prototypes and applications.

This is sacrilege in the temple of Deep Work; tables should be flipped.

In fact, this all looks more like that other form of work Newport describes in Deep Work in the chapter titled “What About Jack Dorsey?”:

“Dorsey reports, for example, that he ends the average day with thirty to forty sets of meeting notes that he reviews and filters at night.”

The necessity of distraction in these executives’ work lives is highly specific to their particular jobs. A good chief executive is essentially a hard-to-automate decision engine, not unlike IBM’s Jeopardy!-playing Watson system. They have built up a hard-won repository of experience and have honed and proved an instinct for their market.

Deep work is not the only skill valuable in our economy, and it’s possible to do well without fostering this ability, but the niches where this is advisable are increasingly rare. Unless you have strong evidence that distraction is important for your specific profession, you’re best served, for the reasons argued earlier in this chapter, by giving serious consideration to depth.

We can call this “executive work”. Newport’s claim was that this mode worked for CEOs because they had staff, and because their main work is to receive “inputs throughout the day, in the form of e-mails, meetings, site visits, and the like - that they must process and act on”, but that this couldn’t necessarily be extrapolated to other jobs.

This sounds a lot like managing a swarm of agents, and I posit that with everyone having access to capable AI agents, the economic landscape is shifting such that the niches where the executive model is valuable are growing rapidly, as evaluation of work, feedback, and context-switching become more useful.

While the current examples of ‘hyperproductivity’ (to be honest I dislike the name, please reference the title of the post for the true name) are in software engineering or entrepreneurship, those domains cover or neighbor many deep work territories. And while the Yegge example of working on a mature codebase isn’t definitive, the rate at which software agents are improving suggests long-running continuous agents will become increasingly capable.

Cal Newport has written about trends in AI and work - in May 2025 he outlined his skepticism that agents will live up to the hype, arguing that pre-training scaling has faltered, that reinforcement learning tuning is “piecemeal and hit-or-miss”, and that we’re not on a trajectory to good agents.

I don’t think this prediction was the right bet at the time, and it doesn’t look very good at the tail end of 2025, with Gemini 3 showing pre-training scaling improvements and Opus 4.5 and GPT-5.1 improving on coding benchmarks. Just one example, from Anthropic’s Opus 4.5 release this week:

We give prospective performance engineering candidates a notoriously difficult take-home exam. We also test new models on this exam as an internal benchmark. Within our prescribed 2-hour time limit, Claude Opus 4.5 scored higher than any human candidate ever.

The agents are good enough now that, from reports like the above, the pattern of work seems to be changing away from deep work, towards orchestrating fleets of increasingly autonomous agents.

Here’s where I want to pause and say: maybe. Here are some reasons this might not be true:

Maybe deep work is still required for breakthrough thinking. There’s a long-running debate about whether declining scientific productivity stems from increasingly big teams and projects—perhaps a single individual can’t keep an entire idea “loaded into memory,” and thus can’t draw connections or generate new concepts. Parallel execution across agents might make this harder, not easier.

Maybe this will just create more distraction, increasing the returns to focus. In the METR ‘Downlift’ study (where using AI coding tools surprisingly slowed down devs), one of the developers reported that AI tools caused them to be distracted and focus on less valuable work:

Literally any dev can attest to the satisfaction from finally debugging a thorny issue. LLMs are a big dopamine shortcut button that may one-shot your problem. Do you keep pressing the button that has a 1% chance of fixing everything? It’s a lot more enjoyable than the grueling alternative, at least to me.

“As always, small digital hygiene steps help with this (website blockers, phone on DND, etc). Sorry to be a grampy, but it works for me :)”

Digital Minimalism lives!

Maybe you need deep work to develop the taste that makes you a good manager. Taste, which is adjacent to good judgement, is important for steering towards valuable activities and not wasting time on slop. But verification is easier than generation. You don’t need to write a great novel to recognize one, and it seems likely that developing the taste to verify good work is easier than developing the ability to produce it yourself.

Maybe this is all hype. Industry and media are incentivized to proclaim a new era of work. But Steve Newman has been arguing against the likelihood of rapid transformation and automation of software engineering—he’s hardly a blind booster.

I suspect there’s some truth to all of these points. The METR study in particular was so surprising that I find it genuinely compelling as an argument against my thesis—and yet I’ve heard too many examples of experienced engineers using AI agents to truly believe it generalizes. Perhaps one needs to practice not being distracted by social media, to keep on top of the swarm?

But even if you’re skeptical that we’re now at the point where the returns to managing and allocating agents outweigh deep work, keep in mind that deep work is a philosophical stance aimed at improving the craftsman’s human capital over years and decades; that’s a long time to bet against compounding AI capabilities.

Also, while writing this section, I had the new Antigravity Gemini IDE/agent build a “well would you look at the time” agent dashboard while I asked Claude 4.5 to check the post for typos.

This is a post about is, not ought. I’ve spent a good portion of November in a deep work mode, writing, and I’ve loved it. I think there are many benefits to this mindset; it feels more like a type of human flourishing we want to encourage.

But it seems likely to be increasingly outcompeted by the cyborg hyperactive cracked-out agent allocator.

We will need to develop philosophies and ways of being that promote thriving in this mode:

  • Getting better at context switching, instead of avoiding it.

  • Focusing on verification of work, instead of generation.

  • Developing delegation skills and taste, rather than raw execution ability.

I’m not sure who will write that book. But I suspect we’ll need it soon.
