Below is a longer, revised version of what I submitted for this round table in The Chronicle of Higher Education, which also includes contributions from people I admire, like Arvind Narayanan, Zeynep Tufekci, and Ian Bogost.
“Automation is a method that removes the need for human beings to act like cogs in a machine,” the anthropologist Margaret Mead wrote in a 1963 advice column. She was responding to a reader who asked whether “instead of freeing man’s spirit, all these engineering triumphs are simply dulling it?”
Mead’s point was a simple one. Automation of “routine tasks” by intelligent machines in the future would provide “time to think, to paint, to pray, to philosophize, to observe, to study the universe.” In short: to be more human.
Mead was imagining machines that replace the “drudgery” of tasks like “carrying loads of bricks.” She did not foresee our present situation: a world in which machines are still pretty bad at carrying bricks — but surprisingly good at writing advice columns in the style of Margaret Mead.
Now that we have machines capable, at the very least, of pretending to philosophize, to observe, and to study, we in higher education need to reassess what counts as intellectual drudgery. Mead’s instinct in 1963 was to look to the past, and specifically to the first machine age of the 19th century, to think through the implications for her own day. In this I believe she was absolutely right. After all, we have already automated away numerous intellectual tasks that were once highly prized: how many of us today can maintain double-entry account books in clean cursive? How many need to? Yet this skill was once considered integral to success in both business and scholarship.

On the other hand, replacing lined paper and cursive with an Excel spreadsheet is very different from replacing the creative, personal, human decisions at the core of research and learning. What are those decisions? For a historian like me, they include: the choice of what question to ask, the choice of what sources to read, and above all, the thousand tiny, unconscious acts of attention which lead a researcher toward a certain set of texts, a specific group of people, or a singular set of themes, as opposed to the myriad other options available.
And this points to the central problem with LLMs as tools for research and learning. They lead, inexorably, to the average. As I’ve written elsewhere:
The issue is that generative AI systems don’t want messy perspective jumps. They want the median, the average, the most widely approved viewpoint on an issue, a kind of soft-focus perspective that is the exact opposite of how a good historian should be thinking.
So what are we to do?
I have believed since 2022, and still believe, that generative AI models have considerable potential as tools for augmenting traditional humanistic research. Two use cases deserve special attention:
1) classifying, sorting, and otherwise extracting metadata from large corpora of public-domain historical sources (one example, appropriate to the season: you can give an LLM a book-length religious text in 17th-century Latin and ask it to output a classification of every demon and djinn referenced therein; a minimal code sketch of this workflow follows the list).
And 2) automated transcription of manuscript documents (but only if this is done by historians trained in paleography who can spot-check the results).
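To make the first of these concrete, here is a minimal sketch of the demon-classification workflow, assuming the Anthropic Python SDK (pip install anthropic); the filename, model string, and prompt wording are illustrative placeholders rather than a recipe:

```python
# Minimal sketch: extracting structured metadata from a public-domain
# historical text with an LLM. Assumes ANTHROPIC_API_KEY is set in the
# environment; "demonology_1620.txt" is a hypothetical source file.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("demonology_1620.txt", encoding="utf-8") as f:
    latin_text = f.read()

prompt = (
    "Below is a 17th-century Latin religious text. List every demon or "
    "djinn mentioned, one per line, in the form: name | epithet or role | "
    "surrounding phrase. Flag ambiguous passages instead of guessing.\n\n"
    # A book-length text would need chunking; naive truncation keeps this
    # sketch short and within the model's context window.
    + latin_text[:50_000]
)

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # swap in whatever current model you use
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text)
```

As with transcription, the output is a starting point, not a result: the historian’s spot-checking against the original Latin is what makes it usable.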
Importantly, these two use cases are not just enormously helpful for historians. They also suggest new pathways for teaching broadly useful skills in humanities classes. Learning how to apply novel tools to sorting and processing data — while also, crucially, being able to critically analyze the limits of that data, such as the biases built into it and the negative space around it — has applications far beyond writing a term paper.
Historians and other humanists thus have a rare opportunity to contribute in a meaningful way to the development and teaching of a new set of skills directly relevant to their discipline but also applicable to the world beyond it.
Such opportunities do not come along very often.
The issue, of course, is that many students are not using LLMs to augment human creativity and research but to replace them. Offloading the act of thinking is the dystopian inverse of the automation of “drudgery” that Margaret Mead wrote about in the 1960s.
This is why I believe that generative AI probably should not be used in K-12 classrooms at all. It is vitally important for students in the early stages of learning to confront intellectual challenges — yes, even drudgery. They have to know what it feels like to think through a complex problem or write a long research paper without the intervention of AI tools.
In the context of university classes, the path forward is challenging — and also, I think, potentially both fun and intellectually rewarding. My personal solution to this state of affairs has been to teach it. I ask students to read and reflect on debates around automation and mechanized minds from the past. I ask them to think about their own K-12 experiences with digital learning tools. And I also ask them to envision the kind of world they want to be adults in, the kind of university education they want to have.
Offloading their creativity to a machine, I remind them, is not just cheating them out of real learning: it is boring.
On the other hand, there are plenty of genuinely new things students and researchers can do with AI tools, things that expand rather than duplicate our skill repertoire. (One example: for a post next week on the history of gesture, I used Claude Code to create an interactive map of the information in this scientific article about “semantic bias” relating to words for left and right; a toy sketch of that kind of code appears below.)
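I won’t reproduce that map here, but to give a sense of the genre, this is a toy sketch of the kind of interactive-map code a tool like Claude Code can scaffold, using the folium library (pip install folium); the two data points are invented placeholders, not figures from the article:

```python
# Toy sketch: an interactive HTML map of the sort an AI coding tool can
# scaffold in minutes. The observations below are illustrative
# placeholders, not data from the "semantic bias" article.
import folium

# Hypothetical examples of left/right words carrying a value judgment.
observations = [
    {"language": "Latin", "lat": 41.9, "lon": 12.5,
     "note": "sinister ('left') acquires negative senses"},
    {"language": "English", "lat": 52.5, "lon": -1.9,
     "note": "'right' doubles as 'correct'"},
]

# Center the map on Europe and drop one marker per observation.
m = folium.Map(location=[48.0, 8.0], zoom_start=4)
for obs in observations:
    folium.Marker(
        location=[obs["lat"], obs["lon"]],
        popup=f"{obs['language']}: {obs['note']}",
    ).add_to(m)

m.save("gesture_map.html")  # open the file in a browser to explore
```

The point is not that students should memorize a mapping library’s API; it is that a fifteen-line script, vetted by a human, can turn a dataset they have compiled and critically analyzed into something explorable.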

In my classes this quarter, I am offering students two alternatives for the final assignment. They can write a traditional research paper, accompanied by a promise that they will not touch generative AI tools.
Or they can produce a “digital artifact” — a data visualization, digital humanities project, historical simulation, or even an educational game — using a combination of
1) original historical research,
2) a public domain data set that they compile and critically analyze, and
3) computer code generated by AI tools like Claude Code.
This is a new experiment, with uncertain results. But at the very least, I hope I can get them to remember that creativity, deep thought, and difficult reading are exactly the things that we once hoped automation would allow us to do more of — not less.
• “Giovanni Pico, count of Mirandola and Concordia, was 23 when he travelled to Rome to become an angel. It was 1487. Christendom’s most important priests would be there; the cleverest theologians would debate him. The pope would watch. Pico was going to dazzle them all. He planned to begin with a poetic, densely allusive speech, which almost no one would understand; then he would make nine hundred pronouncements, each more cryptic than the last, e.g. ‘251. The world’s craftsman is a hypercosmic soul’ and ‘385. No angel that has six wings ever changes’ and ‘784. Doing magic is nothing other than marrying the world.’” (Erin Maglaque in London Review of Books.)
• “The preceding two subsections built the argument that prior to the mass adoption of LLMs, workers’ written proposals functioned as Spence-like signals of effort, and in equilibrium, functioned as signals of worker ability. In this subsection, we present descriptive evidence that suggests that this signaling story breaks down in the post-LLM period.” Text and chart are from Anaïs Galdin and Jesse Silbert, “Making Talk Cheap: Generative AI and Labor Market Signaling,” [pdf].
• Derek Thompson interviews the great Stanford historian Richard White on the parallels between 19th-century railroads and 21st-century AI: “The men who get immensely rich from this knew nothing about running railroads. What they know about is getting subsidies. What they know about is getting loans. And what they know about is draining these corporations of profits while leaving the cost to the stockholders and bondholders. They can do that because these railroads are incredibly corrupt. I mean, they don’t invent American corruption. It’s much older than that, but they bring it into its modern form. They’re the ones who create modern political lobbying.”
If you’ve made it this far, consider signing up for a paid subscription. This support makes Res Obscura possible.