The railway and the balloon


Is AI more like a train or a hot air balloon? Veronica Paternolli and Ryan Calo asked at this year’s We Robot. Nineteenth-century hot air balloons were notoriously uncontrollable, so the US legal system assigned strict liability: if your balloon crashed in someone’s backyard, it was your fault, even if you had done everything possible to prevent it. Railways were far more disruptive but also far more predictable, and so were liable only in cases of negligence. Which regime should apply to AI is an ongoing debate.

Calo and Paternolli also wondered whether agentic AI could reverse 30 years of companies forcing us to take on busywork they formerly did for us. This “shadow work” encompasses everything from retrieving bank statements and completing reCAPTCHAs to pumping our own gas. Actually, more than 30 years: in 1962, Agatha Christie’s Miss Marple complained that self-service supermarkets were replacing shopkeepers who served you. If companies want a negligence regime, Paternolli and Calo argue, they should deploy agentic AI to relieve us of this “sludge” instead of displacing jobs and aggregating wealth.

But would we believe them? Technologists have promised before that their products would upend the balance of power and simplify our lives – some of them the same people and companies. The web browsers and search engines that promised a universe of information are today faithless agents serving their owners and developers. Why should agentic AI – if it’s ever trustworthy – be any different? Many of us want devices that demand less of us – and agentic AI sounds like even more “relationship” work.

Underlying Paternolli and Calo’s argument, however, is a fundamental clash. Like Mireille Hildebrandt at a 2017 Royal Society meeting, they argue that law is purposely flexible so it can adapt to unforeseen circumstances and, even more important, contestable (otherwise, Hildebrandt said, it’s just administration). Computers, even dressed in “AI”, always have hard boundaries underneath. As Bill Smart explained here in 2016, no computer can evaluate standards like the “reasonable woman”. No matter how “fuzzy” its logic, a computer will issue a ticket if you are going even the tiniest fraction over the speed limit. Anti-doping authorities have a similar problem, as Neil Robinson said in a recent episode of the Anti-Doping podcast: the extreme sensitivity of modern tests is catching people with no intent to dope.

Liability wasn’t the immediate problem in Tomomi Ota’s description of everyday life with a Pepper robot at home (YouTube), which she took shopping, to restaurants, and on public transit as part of the Robot Friendly project – an account that led AJung Moon to wonder if a future filled with robots is really desirable. The inevitability narrative would say we’re going to get it anyway, which raises the questions of a) whether we have the resources to make billions of robots and b) where we would put them all.

Sometimes these things fail in the simplest ways: a close-up of a Pepper that has been used as a greeter shows broken fingers because it was not robust enough for the basic social protocol of shaking hands. In studying the integration of robots into customer service situations, Elsa Concas, Stefan Larsson, and Laetitia Tanqueray found that staff consultation is essential. In a staged setting such as the Japanese “ramen and robots” Pepper Parlour, the robots were a draw for customers and appreciated by the staff, who were paid more. In an unstaged airport tourist information center, they were basically useless and ignored. A commenter noted the same is often true of the robots intended for elder care in Japan: most end up in a cupboard.

This theme was also picked up by Emily LaRosa, who studied the limits of explainability in automated apple picking. In this case of “epistemic injustice”, the neglect of local knowledge and ecological tradition led her to propose a “Curated Information Framework”. She concluded that trust in AI systems is not created by transparency on its own if that means handing over large amounts of inscrutable data, but by taking lived context into account – “situated transparency”.

LaRosa’s study echoed the paper Ota co-wrote with Rikiya Yamamoto, which derives new “laws of robotics” to update Isaac Asimov’s Three Laws. Asimov’s laws can’t be programmed; their fixed, “top-down” nature was what he needed in a story-telling device. The real world, they argue, requires principles built bottom-up from practical experience. Their selection: mutual respect, social membership, and co-evolution.

They have lots of competition. Moon counts more than 100 sets of principles and ethical frameworks published since 2018, many of which she says make assumptions debunked in the 2025 paper “The Future is Rosie?” or, as Paul Ohm and David Atkinson discussed, are encoded in the benchmarks – documents used to define AIs’ behavior and priorities. This “latent rulebook”, they said, is increasingly secret.

Meanwhile, like explainability, the right to repair fails for AI, which changes constantly through software updates, networking, and interaction. Ryota Akasaka argued that current legal approaches don’t work for products that aren’t fixed and that will lose everything they’ve accrued if “repaired” to their original state. When Ota was offered the opportunity to upgrade her development-model Pepper, she declined in shock. Replacing your robot’s head, it seems, ends a beautiful friendship.

Illustrations: “Hidden Labour of Internet Browsing”, by Anne Fehres and Luke Conroy. Via AI4Media (CC-by-4.0).

Wendy M. Grossman is an award-winning journalist. Her Web site has an extensive archive of her books, articles, and music, and an archive of earlier columns in this series. She is a contributing editor for the Plutopia News Network podcast. Follow on Mastodon or Bluesky.