AI isn’t replacing radiologists. That was the title of Deena Mousa’s recent article1, widely passed around on tech social media. You see, back in 2012 there was a ChatGPT moment in the field of computer vision, the branch of AI that processes images, including medical scans. The model that was the cause célèbre then was AlexNet, developed by Alex Krizhevsky with other AI luminaries2. It crushed the existing best scores on a challenging task: shown an image of an object, the AI had to say which of a thousand categories it belonged to.
Soon thereafter, other research groups released models of their own built on similar architectures3, and AI started beating humans handily at image classification. In 2016, Geoff Hinton declared that “people should stop training radiologists now”4.
So how does the field of radiology look now? Deena Mousa reports:
“In 2025, American diagnostic radiology residency programs offered a record 1,208 positions across all radiology specialties, a four percent increase from 2024, and the field’s vacancy rates are at all-time highs. In 2025, radiology was the second-highest-paid medical specialty in the country, with an average income of $520,000, over 48 percent higher than the average salary in 2015.”
Nearly 10 years later, Hinton’s prediction has not come to pass. Mousa offers three reasons why. Might similar reasons also keep lawyers busy 10 years from now, post-ChatGPT?
An LLM may have seen a trillion words during training. Still, it is trivial to present it with a sequence of words it has never encountered before. Let me make up one such sequence: purple-haired pigs picked papayas. Similarly, one can give an AI access to the entire history of case law, and still, a good fraction of a lawyer’s clients will present with a scenario which does not have an exact analogue in that history.
Current top-tier AIs do handle inputs they have not seen before. This capability is called “generalization”: the AI generalizes from what it has seen in its training data. Here is an example I used to test Chatlaw’s5 ability to generalize. I asked it to weigh in on a dispute between two people, a dispute involving a street cat. The plaintiff had trapped the cat in a carrier, but before she could retrieve it, the defendant intervened and released it. The plaintiff seeks compensation for the lost cat.
Chatlaw made the connection between this case and an 1805 New York case, a notable case that still stands. In Pierson v. Post6, the dispute was between a man hunting a fox and an intervener who killed the fox and took it for himself. The court’s opinion there established the principles of ferae naturae and bodily seisin. Chatlaw explained how these principles applied to the present dispute, and concluded that the plaintiff had a strong case, because she had established a form of possession over the cat by fully trapping it.
Generalization is almost the defining feature of AI. Without it, AI is just another computer program. An ordinary program typically fails when its input strays from what the programmer carefully designed for. But while AIs, unlike simple computer programs, do generalize, they are not reliably good at it. In comparison, humans are dramatically better7. Let’s look at another hypothetical lawsuit to explore this:
An actor in Manhattan walking to an audition is bumped by a Tesla while on a crosswalk. He takes a fall, but is not injured. He makes it to his audition, but being perturbed by the accident, performs poorly and loses the role. Can he successfully sue the Tesla driver for damages?
Running the above scenario on Chatlaw yields a negative: ‘New York courts are very reluctant to allow recovery for such “consequential” or “indirect” economic losses, especially when they are speculative.’8 But ... if the actor went to a good lawyer, that may not be the last word.
Our good lawyer might notice her client wearing an Apple Watch. The watch shows a large spike in the actor’s heart rate at the time of the accident, a rate that stayed elevated through the audition. She also realizes that her client’s story can be corroborated by video footage of the accident: Teslas have onboard cameras, and their footage can be obtained in discovery. Add a deposition of the receptionist at the audition, to get testimony on how distressed the actor seemed, and there may now be a viable case.
Now, our good lawyer is doing more here than generalization. She is fully present in the world in a way AI isn’t, which is how she notices her client’s choice of wristwear. She is showing creativity in considering how elements of the incident might yield evidence that supports her case. Some of these elements (Teslas and Apple Watches) have little to no historical antecedent, and so are not well represented in AI training data. In the near term, for disputes of consequence, relying on AI alone for lawyering will be risky9.
There are always two or more parties to a litigation, and their interests conflict. When one side adds AI to its team, the others will be motivated to follow suit, or else be outgunned. The AI-assisted team can review more documents, file more motions, and contest more, all without racking up expensive attorney-hours.
What happens when all litigation teams use powerful AI? Well, the lawyers on each team are back to being the difference-makers. Whichever side’s lawyers complement and direct the AI most effectively, that’s the side with the edge.
Another way to look at this is to notice that AI-assisted litigation is not like self-driving cars. As self-driving AI gets better and better, at some point we might say it is ‘good enough’ and hand over the wheel to it entirely. In litigation, as your AI gets better and better, so does the other side’s. So there’s no finish line. If lawyers can improve upon the AI, you will always want them.
Though radiology is not adversarial like law, many other use cases are. Take AI-assisted software engineering. On the face of it, this is not an adversarial context. But companies whose products and services are built on software compete fiercely in the free market. As one company ups its game via faster, AI-assisted improvements to its software, its competitors follow suit, and the rivalry continues. Again, the human software engineers who are most effective at directing and complementing AI will make the difference.
It turns out that radiologists spend only 36% of their time, on average, interpreting scans10. They talk to patients, clinicians, technicians and radiology residents. They also decide what images to order, and experiment with improving scanning protocols.
Similarly, lawyering is more than doing legal research and writing motions, more than compiling interrogatories and reviewing documents. It is speaking with other attorneys. It is deciding which witnesses to depose and which documents to request and subpoena. It is discussing with the client to decide which litigation risks to take, and which compromises to accept. It is coaching the client on how to respond during deposition. It is speaking with the judge and court officials at hearings. It is coming up with and trying out novel legal theories, after exhausting well-worn ones.
If AI does speed up the generation of legal work product, lawsuits can resolve more quickly and cheaply, leading to more lawsuits being filed. This is a simple result of supply and demand economics11. Even as fewer lawyer hours are needed per lawsuit, there could be more demand for lawyer hours in toto. Only when demand becomes satiated will further productivity gains lead to lawyer job losses.
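A stylized back-of-the-envelope illustration of that last point (the specific numbers here are my own assumptions, not figures from Mousa or the sources cited below): suppose AI halves the lawyer-hours needed per lawsuit, and the resulting drop in cost leads clients to file 2.4 times as many suits.

```latex
% Stylized illustration; h, q, and the 0.5x / 2.4x factors are assumptions, not sourced figures.
% h = lawyer-hours per lawsuit, q = lawsuits filed, H = total lawyer-hours demanded.
\[
H = h \cdot q
\qquad\longrightarrow\qquad
H' = (0.5\,h)\cdot(2.4\,q) = 1.2\,H
\]
% Demand is elastic enough that total lawyer-hours rise 20% even though each suit needs half as many.
% Once demand is satiated, filings grow by less than the productivity gain
% (say 1.5x against a 2x gain), and total lawyer-hours fall instead.
```

The break-even point is where growth in filings exactly offsets the productivity gain; beyond satiation, total hours decline, which is the pattern the next paragraph describes for textiles and steel.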
James Bessen, in his paper for Economic Policy12, analyzes this effect across three US industries: textiles, steel, and automobiles. Over 150 years of technological progress, the number of textile workers and iron & steel workers first went up, and then came down. On the other hand, over the past hundred years or so, the number of automotive workers went up, then stayed flat. See chart below.
The difference comes down to demand elasticity: demand for textiles and iron/steel eventually became satiated, whereas demand for cars has not. As technology keeps improving, people remain interested in buying more cars or trading up to a higher-quality one.
Lawyers may be at the start of a curve similar to the chart above. Even if the trajectory follows that of the two curves that eventually declined, we should see an increase in the number of lawyers over the coming 10 years. Maybe this is why there are more radiologists now than ever.
Even if increased demand outweighs technology’s efficiency gains in an industry, there could still be massive job losses for professionals, because casual non-professionals could step in to meet the demand. Fellow Substack writer Nowfal Khadar makes this observation, citing the case of journalism in the 2000s13. The Internet era exploded the consumption of online news, but professional journalists were increasingly sidelined by bloggers, influencers and YouTubers. This won’t happen to lawyers anytime soon, because only a licensed attorney14 can represent a client in a lawsuit.
Someone involved in a lawsuit can represent themselves, as a pro se litigant, and use AI to guide them. But there are severe disadvantages here. A litigant can’t freely argue theories of the case and shape the narrative. Doing so could count as testimony or an admission. Certainly, the litigant can’t plead the Fifth on an issue and then make a point about it that advances their position. Until AI is allowed to speak in court on behalf of its client, a lawyer will be a must, at least in contentious and consequential lawsuits.
Separately, technology is often slow to win approval in official proceedings. In most courtrooms, photography and recordings are still not allowed. Stenographers produce transcripts that then cost thousands of dollars to purchase. When might AI be allowed to listen in on a deposition, let alone in a courtroom?
In radiology, AI faces the institutional constraint of medical malpractice insurance. Insurers will not cover fully autonomous AI diagnoses15. They worry that AI mistakes can happen at scale, with many thousands of scans affected by one faulty AI, bringing class-action-scale liability. Lawyers carry malpractice insurance too. AI-generated motions will likely need vetting and sign-off by a lawyer to be covered.
“To see human beings as objects is not to see them as they are, but to change what they are, by erasing the appearance through which they relate to one another as persons. It is to create a new kind of creature, a depersonalized human being ... In a very real sense, therefore, there cannot be a science of man.”16
So wrote the late philosopher Roger Scruton in 1996, while discussing the concept of freedom. In Scruton’s view, freedom is best understood through the lens of responsibility. He discusses the difference between a decision and a prediction. A decision, such as ‘I shall go to the gym tonight’, attaches to a person, who bears the consequences of the decision and is subject to praise and blame. A prediction, such as ‘I will probably go to the gym tonight, unless it rains’, attaches to an object, an element in a scientific model, and is subject merely to accuracy. Scruton goes on to note the peculiar nature of responsibility:
“The judgment of responsibility attaches an event ... to the person himself ... Not that your actions were the cause, but you were the cause.”17
Blame, responsibility, judgment—isn’t this exactly what the practice of law traffics in? Last fall, I sat in on the controversial trial of Daniel Penny, for the death of a homeless man he had placed in a chokehold on the subway18. Penny was charged with second-degree manslaughter and also with criminally negligent homicide. Murder, manslaughter, and criminally negligent homicide: these are the categories of criminal homicide in New York Penal Law. They distinguish mental states—intent, recklessness and negligence—and the degree of indifference to human life. I observed the prosecutor and the defense attorney examine witnesses in ways designed to ‘score points’ with jurors, assigning and dodging blame and responsibility. The practice of law is the closest thing we have to the science of the human person.
And the human person ticks at the tempo of the human heart. A criminal defendant needs time to decide whether to accept a plea deal. A lawyer needs time to decide whether to take on a particular case. A judge needs to sleep on an appellate brief before making a decision. Decisions such as these are fundamentally different from, say, profit-maximizing investment decisions. AI may provide useful numbers, such as probabilities of success and ranges of sentencing outcomes, but these merely describe the objective situation.
A human eventually needs to make the final subjective call. Taking into consideration all the relevant inputs, one has to sit and meditate on the issue. Friends and family can help with this, but an experienced lawyer is best-placed to provide decisive advice. Much like a friend, a lawyer can get to know his client—even if just over the course of litigation—and develop trust. Unlike the typical friend, the lawyer has seen similar legal scenarios play out, and how they affect people with his client’s temperament.
Offering consequential legal advice is a heavy responsibility. An AI can offer advice, even consequential advice, but it cannot do so responsibly. For responsibility is something that inheres only in members of a moral community, and we do not yet accept AI as a member. A good lawyer friend of mine still gets calls from his past clients when they get into fresh legal trouble. Because he is the one they trust.

