Fully driverless cars could be months away

arstechnica.com

32 points by tnash 8 years ago · 15 comments

Zanni 8 years ago

I've always been a big proponent of self-driving cars, but only because I imagined them as fully autonomous. A fleet of self-driving cars that relies on being backed by, essentially, a call center sounds like a nightmare.

"When Waymo tested in Phoenix earlier this year, drivers sometimes had to take over the wheel to prevent the cars from holding up traffic because it took too long for humans in the command center to answer the cars’ requests for help."

  • Spivak 8 years ago

    I guess the ability for a single person to essentially 'drive' an entire fleet of cars is an improvement, but it's not exactly the utopia I imagined.

vannevar 8 years ago

Fully driverless cars are here today, for that matter. But if you want tamperproof driverless cars that perform well under adverse conditions and are thoroughly tested, then you're going to have to wait a lot longer than "months".

I think this snippet is pretty telling: "Waymo chose the Phoenix area for its favorable weather, its wide, well-maintained streets, and the relative lack of pedestrians." (Emphasis mine.) Probably wise, but I'm sure they've already carefully calculated the risk/return vs pedestrian fatalities and are coming out ahead.

  • shallot_router 8 years ago

    How could a driverless car ever truly be tamperproof, though? At some point we'll just have to accept it as a potential risk.

    • vannevar 8 years ago

      Good question. Tamperproof to me means they can't physically be spooked into doing things, as well as being secure from hacking. In theory, they just have to be as safe as human drivers, who manage to kill thousands of people each year. But as a practical matter, people will probably demand that they be much safer than humans. And it will take a while to prove that out.

Someone 8 years ago

…if ‘fully’ means ‘remotely operated when the going gets tough’, such as, I guess, in bad weather (rare in Phoenix) or when there is a lot of traffic in one place, which are exactly the times when demand for taxis is highest.

ocdtrekkie 8 years ago

This is something you should never hear about a product whose correct operation is life-critical:

"Efrati reports that Waymo CEO John Krafcik faces pressure from his boss, Google co-founder and Alphabet CEO Larry Page, to transform Waymo's impressive self-driving technology into a shipping product."

Combine that with releasing the product in an area with few consumer and safety protections:

"Another important factor was the legal climate. Arizona has some of the nation's most permissive laws regarding self-driving vehicles."

So the TL;DR is that someone is being pressured to rush to market, in an area with few safety regulations, a product that could cause a lot of harm if it malfunctions.

  • nieksand 8 years ago

    On the flip side, the status quo is killing 40,000 people per year in the US alone:

    http://www.nsc.org/NewsDocuments/2017/12-month-estimates.pdf

    • nealabq 8 years ago

      Which invites the question: how long before human-driven cars are no longer allowed on the public roads?

      I'm guessing year 2035.

    • na85 8 years ago

      So we should push out half-baked solutions?

      • nieksand 8 years ago

        Depends on how half-baked.

        From a PR perspective it's a definite "no". Even a single self-driving fatality would lead to global news headlines and a lawyer feeding frenzy.

        But from a protecting people perspective? If making it "live" quicker would speed development and adoption... then maybe.

        E.g. say that for the first 5 years fatalities run at 1.1x the current rate, and after that the tech is actually dialed in and they drop to 0.5x. Over a 10-year window, holding off gives you 400K fatalities whereas pushing earlier adoption gives you 320K. That would be 80K lives saved. (Under the assumptions of this shoot-from-the-hip model.)
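
        A minimal Python sketch of that shoot-from-the-hip model (the 40K/year baseline is the NSC figure linked above; the 10-year window and the 1.1x / 0.5x multipliers are made-up assumptions, not projections):

            baseline = 40_000   # approximate US road fatalities per year (status quo)
            years = 10          # horizon for the comparison

            # Hold off: the status quo continues for the whole window
            hold_off = baseline * years                            # 400,000

            # Push early: 5 years at 1.1x the baseline, then 5 years at 0.5x
            push_early = baseline * 1.1 * 5 + baseline * 0.5 * 5   # roughly 220,000 + 100,000 = 320,000

            print(round(hold_off - push_early))                    # 80000 fewer fatalities under these assumptions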

        • Spivak 8 years ago

          > Even a single self-driving fatality

          It's more specific than that. It will be the first self-driving crash that would have been glaringly obvious to a human driver. Like the Tesla crash where the sensors mistook a white semi for clouds and merrily plowed through it.

          If the tech is adopted too soon, it might face a death knell when computer-assisted driving proves to be much safer than fully autonomous driving. It's much easier to fill a human's blind spots than to replace the driver entirely.

        • emiliobumachar 8 years ago

          Another plausible hypothesis is that the tech is already better than humans today, but not orders of magnitude better. Killing 30 thousand people a year instead of 40 thousand is not politically viable. So we just keep it in the lab while 10 thousand extra people a year die because of politics.
