
So much recent news about Waymo:
Waymo robotaxis have been caught driving past stopped Austin school buses 20 times, a problem that has persisted through two successive Waymo software updates that were supposed to fix it. Waymo refused to stand down operations during school bus hours when asked to do so by the school district. They’re doing yet another software update to fix the problem. This time for sure!
This video of the infractions in Austin shows some interesting (and very concerning) behavior. In some incidents the Waymo clearly hesitates, then decides to go past anyway. In some scenarios it just blows past as if the school bus is nothing special. This is not a matter of subtle situational ambiguity.
The reason they gave (per the referenced article) for not standing down problematic operations amounts to a position that Waymo has the right to unilaterally decide what risks it takes with the safety of children. But apparently if you make enough of a stink they will give fixing it another try.
An earlier school bus incident reported in Atlanta prompted the first software update, so this is not solely an Austin issue, and Waymo was already on notice that this was a potential risk area.
It should not take a public outcry and so many documented incidents to get a robotaxi to stop for school buses. Waymo hasn’t hit a school kid yet, but getting lucky is not the same as being safe. Their luck will run out if they don’t get a handle on this.
Driving through a tense police response scene, including driving past a suspect lying prone on the roadway.
Hitting a dog (not their first) in San Francisco, relatively soon after the high-profile killing of a popular bodega cat.
Edit: breaking news on the death of Kit Kat: video shows a person very close to the robotaxi trying to lure the cat out from under the vehicle as it pulls away. As the reporting put it: “A human driver, she believes, would have stopped and asked if everything was OK after seeing a concerned person kneeling in front of their car and peering underneath.” Waymo’s official description of the events seems to have left out important context for the mishap.
An intentional shift by Waymo to drive more assertively, with The Wall Street Journal saying “Waymo’s Self-Driving Cars Are Suddenly Behaving Like New York Cabbies: Autonomous vehicles are adopting humanlike qualities, making illegal U-turns and flooring it the second the light goes green” (This is a complex topic, especially with regard to accountability if breaking road rules contributes to a mishap. It is unclear how this change will affect their mishap rates.)
A recent start to driving on freeways, which have significantly different safety tradeoffs than urban centers due to higher speeds. (It is unclear how this change will affect their mishap rates.)
A new narrative that somehow there is a public health imperative to adopt this technology. But the argument hinges on non-existent proof that the technology is already saving lives, and does not consider that there are other proven and less investment-intensive ways to reduce traffic fatalities. The jury is still out on Waymo fatality rates, and will be for quite some time.
Endless press and social media repetitions that Waymo has zero fatalities in 100 million miles, conveniently leaving out the at-fault qualifier on that number. A more objective evaluation puts it at something like 0.47 pro-rata fatalities. Waymo’s position amounts to counting only serious crashes in which they admit majority blame.
Announcements of new deployment cities and a big expansion push beyond that.
Continual symptoms of serious problems with ineffective remote assistance. In essence, every failure that makes it to the news is a remote assistance failure one way or another: a failure to request assistance when needed, an inability to continue safe operations while waiting for assistance, or incorrect assistance actions — all three of which we have seen from Waymo in the past. For a company that touts its transparency, Waymo’s silence on this topic and the take-down of a podcast on their remote assistance practices speak volumes.
All of this is well within what we should have expected. The reality is nothing about Waymo’s scale-up trajectory has changed in the last month, except perhaps Waymo getting more serious about primping for an IPO or an additional investment round.
Waymo will continue to expand, and they are entirely in control of the calculus of how much risk they subject other road users to as they do so. Waymo will continue to face new challenges as they scale up, because they are nearer the beginning than the end of their robotaxi scaling journey.
Every time a Waymo robotaxi does something dangerous, they will in essence claim that we should overlook it because no harm was done, or any harm wasn’t their fault, or someone else initiated the mishap, and anyway they are always improving so that one doesn’t count, or never mind the mess because they are Saving Lives! And so on. The narrative vacillates between claiming they have proven they don’t make the stupid mistakes humans make and, when advantageous, claiming that humans make that same mistake too.
In Waymo’s words: “Safety is our highest priority at Waymo, both for people who choose to ride with us and with whom we share the streets. When we encounter unusual events like this one, we learn from them as we continue improving road safety and operating in dynamic cities.”
But we are left to wonder why the “world’s most experienced driver” still struggles to handle obviously foreseeable circumstances after 100 million miles of experience. In one case it was caught violating a critical road safety rule in the same general scenario at least 20 times, spanning at least two bug-fix updates, and counting.
Many like Waymo’s robotaxi service, and on the good days it is by all accounts quite good. But safety is all about what happens on the bad days, even if they are rare. Moreover, Waymo is doubling, tripling, and quadrupling down on net statistical safety in response to the bad days, which I think is exactly the wrong strategy. They should be building trust through transparency rather than shrugging off problems.
Waymo has every incentive to continue their expansion juggernaut using this same playbook until they get their IPO or they suffer a mishap they can’t shrug off. It’s unclear which will happen first.
Phil Koopman has been working on self-driving car safety for almost 30 years, and embedded systems for even longer than that. He is the author of a new book: Embodied AI Safety.
