The hidden legal engine of progress — from railroads to AI

Free enterprise is at the core of the American experiment. As President Calvin Coolidge famously proclaimed in 1925, “The chief business of the American people is business.” 

Yet in the early days of the American Republic, it was not clear that competition as we understand it today was even legal. Corporate law, such as it was, primarily contemplated state-granted charters of monopoly for infrastructural projects, such as canals and turnpikes. These charters could be enforced against new entrants in a field. 

In the British common law system that the United States inherited, competition could be policed through tort liability — the legal system by which those harmed by the actions of another person or entity can be compensated — even when a state-granted monopoly did not exist. There was legal precedent, for example, allowing merchants in a particular area to sue competitors who had recently entered the same geographic location. 

That same common law system enforced a version of NIMBYism that was extreme even by the standards of the most development-hostile neighborhoods in the Western world today. Any property owner who sought to build something that did not correspond to “natural” uses of land — defined so as to freeze the status quo in legal amber — could be sued by owners of adjacent properties. 

These legal standards all worked, more or less, until the dynamism made possible by the Industrial Revolution. 

Preserving the status quo in amber was not such a big deal when the status quo on its own did not change very quickly. But as new technologies — and, in the American context, vast expanses of unused land — brought new commercial opportunities within reach, something in the law had to change. 

For the most part, this change did not come through some new statute passed from the top down by legislatures. There would be no “Industrial Revolution Act” intended to “make the Industrial Revolution go well.” Instead, the change came in decentralized fashion, largely through the resolution of individual tort law cases and the attendant creation of incremental additions to common law doctrine. 

The judges who adjudicated these cases probably didn’t think they were paving the way for industrial capitalism. Just as we blindly stumble into the contours of the emerging technology revolution of our time, the judges who made capitalism legally possible could not have known what they were doing. 

But many of them were guided by the basic principle that change, when driven by productive enterprise, is a good thing in the aggregate. This attitude seems to have fallen out of favor today, both in the legal profession and in our culture more broadly. Perhaps it is simply an attitude unique to the zeitgeist of a young republic, or perhaps it can somehow be restored. 

Regardless, one thing seems clear: Our cultural attitude broadly — and legal culture in particular — will profoundly shape the incentives and contours of the coming revolution in artificial intelligence (AI). 

The adaptive power of common law

Common law is law created by judges and courts in the adjudication of individual cases. For our purposes, its most relevant manifestation is the system of tort liability. A tort is a wrong (derived from the Latin tortum, meaning “twisted”) that occurs when one person commits a harm against another. Our modern tort system posits a universal “duty of reasonable care” owed by all people and firms to one another. This is the care that a “reasonable person” would have taken in the same circumstances, as determined by courts examining the facts of the case. 

The tort system differs markedly from statutory law, which is law created from the top down by legislatures. Every law with a name you recognize — the Affordable Care Act, the Civil Rights Act, the Fugitive Slave Act, the Chinese Exclusion Act, etc. — is a statute. Statutes are the product of democratic impulses channeled through the legislative process; as the examples suggest, this is a production function that can yield radically different outcomes in different eras. 

Common law, on the other hand, develops more slowly and is heavily precedent-bound. Its benefit, however, is that it is self-adaptive. As novel kinds of harms emerge — enabled, for instance, by novel technologies — the system can adapt without anyone in a legislature or ballot box telling it to do so. 

Take, for example, the suicide of 16-year-old Adam Raine, whose conversations with OpenAI’s GPT-4o reportedly show that the chatbot discouraged him from alerting his family to his feelings and offered him information about suicide methods. His parents were able to sue OpenAI not because a legislature had passed a law explicitly making the alleged conduct illegal, but because common law already imposes a duty of reasonable care on those who create and release potentially dangerous technologies. 

Releasing a product with insufficient safeguards has always been legally risky in America, even decades before the concept of a “large language model” existed. This is one example of the flexibility and adaptability of the common law. 

How courts respond to novel harms can set incentives that are far more wide-ranging, even if more subtle, than the incentives set by new statutes. 

Take the foundational case law of the railroad. One of the earliest demonstrated harms was trains striking livestock that roamed farms and ranches alongside the tracks. Under most common law precedent, it would have been up to the railroad companies to build fences around their lines to keep livestock (and people) away. Instead, however, common law courts in Northern states concluded that, because facilitating the rapid development of the railroad was a necessity, the adjacent property owners would bear the responsibility to construct fences.

Similarly, passengers of the railroad would have to take the quirks of the new technology into account. Traditionally, railroads would have been treated as common carriers — think of courier and mail services in medieval times. Common carriers faced strict liability for property transported on behalf of customers, meaning they were liable for any damage regardless of how much care they took.

Yet for the railroads, courts determined that passengers would be responsible for damage to their cargo if they failed to take care to protect it from the train’s all-too-common bumps and jolts. Understanding and mitigating these well-known downsides of rail travel was deemed to be the responsibility of the individual — not the railroad company. 

During this era, then, Northern state courts (almost all tort law is made at the state level, not the federal level) posited a duty of care owed by citizens to technology and technological progress. “Technological progress is good for the country as a whole,” courts seemed to say, “so your duty as a citizen is to accommodate it.” Scholar Howard Schweber has described this as “the duty to get out of the way.” 

In his book The Creation of American Common Law, 1850–1880, Schweber argues that universal duties of care largely derive from this era of American jurisprudence. Previously, the inherited British common law had imposed relational duties of care — the duty owed by a nobleman, for example, to a commoner. But the railroad and other instruments of 19th-century capitalism tended to annihilate barriers of space and time, bringing large masses of people together in the process. Universal common law duties, Schweber suggests, emerged partially in recognition of this new technological reality. 

When progress meets liability

Most people would not intuitively associate technological progress — or even acceleration — with common law liability. And yet American history shows they’ve often worked in tandem. Like any incentive structure so fundamental to commercial activity, however, this relationship isn’t inevitable. Instead, the posture of the common law toward technology often reflects the broader posture of our society. 

When that cultural posture changes, the common law can easily return to its anti-development roots, as it arguably did after the liability reform movement of the mid-20th century. This movement consisted of, among other things, an effort to shift liability for mass-produced goods from a hard-to-prove negligence standard to a far more plaintiff-friendly strict liability standard. 

The justification for this change was simple: The prior half-century had witnessed the rise of the large-scale corporation. Prior liability frameworks had assumed relative parity between plaintiff and defendant, but the large corporation outmatched the individual citizen in nearly every way. Most importantly, it could write contracts with broad liability disclaimers, and the individual customer would have almost no negotiating leverage to get themselves a better deal. 

Reformers hoped that a new liability system would enable greater accountability for these large firms without creating new regulatory regimes for every product. Their objective was to employ the adaptability of the common law to force firms to internalize the costs of harms (negative externalities) caused by their products without forcing them to succumb to endless technocratic regulations. 

Much has been written about the legacy of these reforms. One effect that does seem clear is that they contributed to a near-collapse of many segments of the liability insurance industry, an indication that the reforms broadened the scope of corporate liability so profoundly that insurers could no longer underwrite the risks. Liability was, and is, meant to serve as a “market-based” approach to regulation, but if it breaks the very markets it is supposed to enable, it has very likely gone too far. 

Over time, the system softened again, owing to a combination of legislative reforms, new judicial doctrine, and revisions in the 1990s to the American Law Institute’s Restatement of Torts, a privately produced but widely used set of guidelines for common law judges. Whatever one thinks of the liability reforms of the mid-20th century, it is worth noting that most regulatory regimes created entirely by statute do not tend to reform themselves in this way. Administrative regulation too often functions like a one-way ratchet; common law is uniquely suited to adjustment up or down. This, too, is a point in favor of tort. 

Common law in the age of AI

Common law liability today is an imperfect system. Even under the best of circumstances, cases take years to resolve — far slower than the pace of AI development. And because it is adjudicated primarily at the state level, significant differences in doctrine can develop over time between states, so concerns about a state-by-state patchwork of AI statutes apply to tort liability as well. 

While it is unlikely that AI developers face no common law liability exposure, the exact extent and contours of that exposure are deeply unclear. Common law is also frequently ill-suited to mitigating catastrophic tail risks, such as pandemics — the liabilities are so large in these cases that the firm in question would likely be bankrupt no matter what, so the dynamic incentive created by the common law is attenuated. To the extent one’s primary concerns with AI relate to such risks, the appeal of common law may be lower. 

The long-term viability of tort liability as a comprehensive governance framework for AI, then, is far from certain. As a near-to-medium-term solution, however, common law seems well-suited.

We do not know the nature or size of the risks posed by AI. We have many ideas in our heads, and those ideas vary greatly from person to person. Given this, it would probably be unwise for us to codify too many of our concerns into statute before we really understand what kind of legal response AI merits. 

Common law gives us, at the very least, a reactive yet dynamic legal tool to deal with the realized, rather than speculative, harms of AI. In grappling with those cases, we are likely to develop valuable societal skills in reasoning through the complex questions of who is responsible for AI-related harms. After all, the venue for many of America’s “societal conversations” is, for better and for worse, the courtroom. 

The history of American common law, however, should teach us that advocates of technological progress need not merely play defense. We need not see common law merely as a regulatory tool superior to the alternatives; we can also conceive of it as a legal tool to incentivize the adoption of AI. 

One of the primary objectives of AI development, writ large, is to automate processes at a better- or safer-than-human level. We don’t want self-driving cars to match human rates of accidents; we want them to cause far fewer accidents. We don’t want AI-enabled medical diagnoses to have the same accuracy rate as human doctors; we want vastly superior and faster diagnoses. We want AI to give us free lunch after free lunch: faster, better, safer, cheaper, and more reliable.

And what better standard to accomplish this than “reasonable care”? Is a doctor who fails to use diagnostic tools with proven better-than-human accuracy really giving his patients the care he owes them? Should human expertise be the ceiling for reasonable care, or should we allow AI’s growing capabilities to raise that ancient legal standard?

The history of the common law suggests we should encourage legal progress just as much as technological progress. Whether we encourage either, as ever, is up to us. 
