The Inversion: What Remains of Research When the Machine Can Ideate?

A few months ago, I asked an AI to sketch a distributed consensus protocol for a problem I’d been puzzling over. In 12 seconds, it produced something structurally similar to what a capable Ph.D. student might develop over several weeks—not identical to what I had in mind, but plausible, and in some ways more elegant. I sat with that for a while. Not because the output was perfect (it wasn’t), but because of what it implied about the nature of the work I thought I was doing. Over the winter break, I used Claude to solve an entire research problem at the intersection of architecture, security, privacy, and economics.

We are, I think, at an inflection point that many haven’t fully processed. And I suspect part of the reason is that doing so honestly requires confronting some uncomfortable truths about what we academic researchers thought made us valuable.

𝗧𝗵𝗲 𝗦𝗲𝗹𝗳-𝗜𝗺𝗮𝗴𝗲 𝗪𝗲’𝗿𝗲 𝗣𝗿𝗼𝘁𝗲𝗰𝘁𝗶𝗻𝗴

Let’s be honest about the narrative we tell ourselves. The academic researcher is the brilliant mind—the person who sees what others don’t, who synthesizes across domains, who has the creative spark that produces genuine insight. We are not mere laborers; we are thinkers. The university exists, in part, to shelter and cultivate this rare capacity for deep intellectual work.

This identity is deeply woven into how we see ourselves, how we hire, how we tenure, how we organize the entire enterprise of research. The Ph.D. is a certification of 𝘣𝘳𝘪𝘭𝘭𝘪𝘢𝘯𝘤𝘦, not just competence. We don’t train technicians; we cultivate minds.

What if this self-image is about to collide with reality in the same way chess grandmasters’ self-image did?

𝗧𝗵𝗲 𝗖𝗵𝗲𝘀𝘀 𝗣𝗮𝗿𝗮𝗹𝗹𝗲𝗹 𝗪𝗲 𝗞𝗲𝗲𝗽 𝗜𝗴𝗻𝗼𝗿𝗶𝗻𝗴

Thirty years ago, if you told chess grandmasters that a laptop would soon beat the world champion—not occasionally, but trivially, every single time—they would have found it difficult to believe. Chess was the quintessential intellectual game. It required creativity, intuition, the ability to see patterns invisible to ordinary minds. It was, in some sense, a proof of human cognitive exceptionalism.

And then it wasn’t.

Today, Magnus Carlsen—arguably the greatest chess player in human history—would lose to a chess engine running on your phone. Not sometimes. Every time. The gap isn’t close; it’s embarrassing. A $50 piece of software plays chess better than any human who has ever lived or ever will live.

What happened to chess culture? It adapted. Humans still play each other, and we find it meaningful. But no one pretends anymore that human chess represents the pinnacle of chess capability. The locus of “best chess thinking” moved from human brains to silicon, and it’s never coming back.

Are we watching the same thing happen to research?

𝗧𝗵𝗲 𝗘𝗿𝗱ő𝘀 𝗣𝗿𝗼𝗯𝗹𝗲𝗺𝘀: 𝗔 𝗪𝗮𝗿𝗻𝗶𝗻𝗴 𝗦𝗵𝗼𝘁

Here’s something that should unsettle us: GenAI systems are now solving problems from Paul Erdős’s famous list of open problems—not by finding existing solutions in the literature (as earlier, debunked claims suggested), but by generating original proofs that have been formally verified in Lean and accepted by Terence Tao.

Tao himself offers the important caveat: these are the “long tail” of Erdős problems—the more accessible ones, solvable with known techniques. The hardest problems remain beyond reach.

But notice what this caveat concedes: AI systems are now capable of generating original mathematical arguments, applying established methods to open problems, and producing formally verifiable proofs. Five years ago this was science fiction. The frontier has moved. The question is only how far and how fast it continues to move.
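
For readers who haven’t seen this pipeline up close, here is a minimal sketch of what “formally verified in Lean” means in practice. The theorem below is deliberately a toy, nothing like an Erdős problem; the point is the workflow:

```lean
-- A toy statement, not an Erdős problem. In Lean, a proof is a term the
-- type checker validates: if this file compiles, the theorem is
-- machine-checked. "Accepted" means verified, not merely refereed.
theorem add_comm' (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n

-- A wrong proof would not merely look dubious; the file would fail to
-- compile. There is no arguing with the checker.
```

The Erdős-problem proofs pass through this same checker, just at far greater length: once the file compiles, correctness is no longer in dispute.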

If machines can do that, what exactly is the hard part that we’re contributing?

𝗧𝗵𝗲 𝗨𝗻𝗰𝗼𝗺𝗳𝗼𝗿𝘁𝗮𝗯𝗹𝗲 𝗜𝗻𝘃𝗲𝗿𝘀𝗶𝗼𝗻

Here is the pattern I keep encountering:

𝗦𝗰𝗲𝗻𝗮𝗿𝗶𝗼 𝗔: I describe a research problem to Claude. It generates three candidate approaches, complete with pseudocode, anticipated failure modes, and suggested experimental setups. The “creative” work—the ideation, the “what if we tried X,” the part we told ourselves required years of training and rare cognitive gifts—took seconds.

𝗦𝗰𝗲𝗻𝗮𝗿𝗶𝗼 𝗕: I ask it to implement a complex systems prototype involving specific hardware quirks, obscure kernel interfaces, and genuine engineering judgment under uncertainty. It struggles, hallucinates, produces something unusable. The human still has to do it.

The irony is sharp: the generative work, the brilliant-insight work, the “this is why I got a Ph.D.” work—this is increasingly commoditized. What remains stubbornly human is often the mechanical work: the debugging, the cluster babysitting, the getting-RDMA-to-actually-work, the running of human subjects, the schmoozing of program committees.

We romanticized creativity as the hard part. It turns out creativity may have merely been the expensive part, and expense is not the same as difficulty. Now that cost is collapsing toward the $200/month of a Claude Max subscription.

𝗧𝗵𝗲 𝗡𝗲𝘄 𝗦𝗰𝗮𝗿𝗰𝗶𝘁𝘆: 𝗦𝗵𝗼𝘄𝗺𝗮𝗻𝘀𝗵𝗶𝗽 𝗢𝘃𝗲𝗿 𝗦𝘂𝗯𝘀𝘁𝗮𝗻𝗰𝗲?

If ideation is cheap, what becomes expensive?

I worry the answer is: salesmanship. The ability to frame, to pitch, to build narrative, to network, to position, to market. The skills of the TED-talk circuit, not the skills of the brilliant mind.

This has always been part of academia—we all know researchers whose success outpaces their intellectual contributions, and vice versa. But there was at least a floor: you needed genuine ideas to sell. The showmanship was necessary but not sufficient.

What happens when the ideas themselves become commodity inputs? When anyone with a good prompt and a subscription can generate a plausible research direction, and novel solutions to go with it?

The terrifying possibility: the hard part of academic success becomes being the kind of person who’s good at promoting ideas, not generating them. The brilliant introvert who sees deeply but sells poorly becomes even more disadvantaged. The charismatic networker with good taste in other people’s AI-generated ideas thrives.

Is this the future we want? More importantly: is this the future that’s coming whether we want it or not?

𝗪𝗵𝗮𝘁 𝗜𝘀 𝗮 ‘𝗖𝗼𝗻𝘁𝗿𝗶𝗯𝘂𝘁𝗶𝗼𝗻’ 𝗡𝗼𝘄?

Consider a systems paper. Traditionally, the contribution might be: “We identified that X is a bottleneck, proposed technique Y to address it, demonstrated Z% improvement, and analyzed the tradeoffs.” The student who did this work developed taste, judgment, and deep familiarity with the problem space.

But if an AI can generate plausible Y’s on demand—and increasingly it can—the contribution shifts. Perhaps it becomes:

● 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝘀𝗲𝗹𝗲𝗰𝘁𝗶𝗼𝗻: Knowing which questions matter (though AIs are improving here too)

● 𝗧𝗮𝘀𝘁𝗲 𝗮𝗻𝗱 𝗰𝘂𝗿𝗮𝘁𝗶𝗼𝗻: Recognizing which of the AI’s suggestions are actually good

● 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 𝗿𝗶𝗴𝗼𝗿: Ensuring the idea actually works, not just that it sounds good

● 𝗡𝗮𝗿𝗿𝗮𝘁𝗶𝘃𝗲 𝗮𝗻𝗱 𝗽𝗼𝘀𝗶𝘁𝗶𝗼𝗻𝗶𝗻𝗴: Making the work legible and important to the community

Notice what’s happened: the intellectual core has migrated from generation to curation and presentation. The researcher becomes less architect and more gallery owner—selecting, contextualizing, and selling work rather than creating it.

Is this still “research”? Or is it something else wearing research’s clothes?

𝗧𝗵𝗲 𝗚𝗿𝗮𝗱𝘂𝗮𝘁𝗲 𝗦𝘁𝘂𝗱𝗲𝗻𝘁 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻

Let me pose this starkly: Do we need graduate students to do research, or do we need graduate training to produce the next generation of researchers?

These sound similar but may have diverged.

If the goal is producing research outputs, the calculus is changing fast. A senior researcher with good judgment, armed with AI tools, may be able to explore a problem space that previously required a small team. The AI doesn’t get discouraged, doesn’t need mentorship, doesn’t have qualifying exams, doesn’t need healthcare or a stipend. It’s not a researcher, but it’s an increasingly capable research instrument.

If the goal is producing researchers—people with deep intuition, the ability to frame problems, the judgment to know what matters—then we still need some process for developing that. But the current Ph.D. model was designed for a world where the doing of research was inseparable from the learning of research. What happens when AI compresses the “doing” part?

There’s a possible future where:

● Graduate training becomes shorter, more focused on taste and judgment than technique

● The “apprenticeship” model shifts from “learn by doing extensive labor” to “learn by evaluating and directing”

● Research groups look less like faculty + students and more like faculty + AI + a smaller number of highly skilled integrators

● The Ph.D. becomes less about “I did original research” and more about “I learned to orchestrate AI-augmented research”

I’m not sure this future is good. But I suspect it’s coming, and we’re not preparing for it. This is particularly critical for early and mid-career researchers building skills for a world that may not exist in five years, and for institutions interviewing and hiring for capabilities that may soon be automated.

𝗧𝗵𝗲 𝗗𝗲𝗻𝗶𝗮𝗹 𝗪𝗲 𝗖𝗮𝗻’𝘁 𝗔𝗳𝗳𝗼𝗿𝗱

Here’s what I think is happening: we are in denial.

We tell ourselves that AI is “just a tool,” that it “can’t really understand,” that there’s something ineffable about human creativity that will always remain beyond its reach. We told ourselves the same things about chess. We were wrong.

We tell ourselves that our judgment, our taste, our ability to ask the right questions—these are the truly hard things. And maybe they are, for now. But “for now” is doing a lot of work in that sentence, and the trend lines are not encouraging.

I can already hear the objection: “But LLMs still need human guidance! They hallucinate! Human judgment is the real value!” This is clutching at straws. Yes, weaker tools like ChatGPT often need hand-holding—careful prompting, fact-checking, iterative refinement. But the frontier has moved. Systems like Claude Opus and Gemini Pro are in a different league entirely. With these tools, the hard and creative parts of research—the ideation, the novel connections, the “what if we tried X”—are a prompt and a $200/month subscription away. The gap between “AI needs human guidance” and “AI occasionally needs human correction” is narrowing fast. Betting your career on that gap staying wide is not a strategy; it’s a prayer.

We focus on AI’s current limitations—the hallucinations, the lack of true understanding, the inability to do certain kinds of reasoning—as if these are permanent features rather than temporary limitations of a technology improving faster than most of us expected. Most who cite these limitations haven’t explored the full capabilities of current systems (and are still using ChatGPT). You can interpret that sentence as you see fit.

And underneath it all, I suspect, is a deep reluctance to confront what it would mean for our self-conception if the “brilliant mind” part of being an academic became something a subscription service could approximate. We have built identities and institutions and hiring criteria and status hierarchies around the idea that what we do is cognitively rare. What happens when it isn’t?

𝗪𝗵𝗮𝘁 𝗜𝘀 𝗜𝗻𝘁𝗲𝗹𝗹𝗲𝗰𝘁𝘂𝗮𝗹 𝗟𝗲𝗮𝗱𝗲𝗿𝘀𝗵𝗶𝗽 𝗡𝗼𝘄?

If AI can generate ideas, and if implementation is either AI-assisted or relegated to (devalued?) mechanical labor, what is intellectual leadership in research?

Some possibilities:

1. 𝗧𝗮𝘀𝘁𝗲 𝗮𝘁 𝘀𝗰𝗮𝗹𝗲: The ability to evaluate many AI-generated ideas quickly and identify the genuinely promising ones

2. 𝗣𝗿𝗼𝗯𝗹𝗲𝗺 𝗼𝗿𝗶𝗴𝗶𝗻𝗮𝘁𝗶𝗼𝗻: Finding the questions worth asking in the first place (though AI is encroaching here)

3. 𝗡𝗮𝗿𝗿𝗮𝘁𝗶𝘃𝗲 𝗰𝗼𝗻𝘀𝘁𝗿𝘂𝗰𝘁𝗶𝗼𝗻: Weaving results into stories that matter to the field

4. 𝗖𝗼𝗺𝗺𝘂𝗻𝗶𝘁𝘆 𝗮𝗻𝗱 𝗱𝗶𝗿𝗲𝗰𝘁𝗶𝗼𝗻-𝘀𝗲𝘁𝘁𝗶𝗻𝗴: The fundamentally social work of deciding what a field cares about

5. 𝗜𝗻𝘁𝗲𝗴𝗿𝗮𝘁𝗶𝗼𝗻 𝗮𝗻𝗱 𝘀𝘆𝗻𝘁𝗵𝗲𝘀𝗶𝘀: Connecting ideas across domains in ways that require broad knowledge

Notice that these are more curatorial, social, and political than generative. The intellectual leader becomes less the person who has the brilliant ideas and more the person who recognizes, validates, positions, and promotes them.

This is, perhaps, what intellectual leadership has always been, and we were just telling ourselves a flattering story about the primacy of individual genius. Or perhaps something real is being lost, and we should mourn it even as we adapt.

𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗜 𝗗𝗼𝗻’𝘁 𝗛𝗮𝘃𝗲 𝗔𝗻𝘀𝘄𝗲𝗿𝘀 𝗧𝗼

I’ll end with the questions that keep me up at night:

● If a student uses AI to generate their core research idea, is it their idea? Does it matter? How would we even know?

● What does “training” someone to do research mean when the doing is increasingly automated?

● Are we preparing a generation of Ph.D. students for a model of research that won’t exist by the time they graduate?

● If the human role is increasingly taste, judgment, and salesmanship, how do we teach that? Do we even want to?

● What happens to the intrinsic motivation of research when the struggle—the part that makes breakthrough meaningful—is compressed or eliminated?

● In a world of abundant idea generation, does success shift entirely to promotion and positioning? And if so, is that academia anymore, or is it something else? Or is this what academia always was?

● Are we, like chess grandmasters in 1995, about to experience a fundamental demotion in our sense of what we are and what we’re for?

I don’t think the answer is to resist these tools or pretend they don’t change things. But I do think we owe it to ourselves—and especially to the students whose careers we’re shaping—to stop denying what’s happening.

The machines can ideate now. The brilliant-mind theory of the academic researcher is being tested, and it may not survive. The question is: what do we do next, and who do we become?

I welcome your thoughts, disagreements, and especially your discomfort. If this doesn’t unsettle you at least a little bit, I’m not sure you’re paying attention.

𝗙𝘂𝗿𝘁𝗵𝗲𝗿 𝗥𝗲𝗮𝗱𝗶𝗻𝗴: Ethan Mollick’s excellent work on the Jagged Frontier (https://www.oneusefulthing.org/p/the-shape-of-ai-jaggedness-bottlenecks) provides additional context on AI capability patterns.

This post originally appeared on LinkedIn.

Karu Sankaralingam

Karu Sankaralingam is Mark D. Hill and David Wood Professor at the University of Wisconsin-Madison. An entrepreneur and an inventor, he is an IEEE Fellow who in 2017 founded SimpleMachines. Sankaralingam pioneered the principles of dataflow computing, focusing on the role of architecture, microarchitecture, and the compiler. He has published more than 100 research papers (including nine that received awards), has graduated 10 Ph.D. students, and is listed as an inventor on 21 patents.