License to not drive: Google’s autonomous car testing center

medium.com

70 points by davidcgl 10 years ago · 22 comments

timothya 10 years ago

> For some time, Google has been convinced that the semiautonomous systems that others champion (which include various features like collision prevention, self-parking, and lane control on highways) are actually more dangerous than the so-called Level Four degree of control, where the car needs no human intervention. The company is convinced that with cars that almost but don’t drive themselves, humans will be lulled into devoting attention elsewhere and unable to take quick control in an emergency.

I think this is a really good perspective. Considering how often drivers already do things like use smartphones behind the wheel of non-self-driving cars, that sort of behavior would only be magnified by partial autonomy - which is very dangerous! Humans get distracted or bored easily, especially when completing routine tasks. I'm glad that Google is choosing to build a car that never needs human intervention rather than rushing to market with a partial solution.

Here's a video where you can see what distracted teen drivers look like. Terrifying. http://youtu.be/SDWmwxQ_NnY

  • ghaff 10 years ago

    On the one hand, they're of course correct. As with automation generally (whether cars, airplanes, or software deployment), once you get to a certain level of automation, you pretty much have to be all in because humans can't act quickly enough or with enough throughput.

    On the other hand, it's easy to see why auto manufacturers and others are uninterested in an all-or-nothing goal that is likely decades away: they want incremental features they can sell in the interim.

    Of course, their challenge is figuring out which incremental approaches work, given that humans will not pay attention once you reach a certain level of automation. Perhaps you enable full automation only in scenarios where it works reliably--say, freeways in certain weather conditions--and where it's legally allowed. (Though I suspect the first step is that people will use "autopilots" and go ahead and play with their phones--even though they're not supposed to--given that many already do that today.)

  • rconti 10 years ago

    I concur. Here's a comment I made on another thread about self-driving cars, and my experience with even rudimentary assistance features on my 2016 VW (coming from a car with no such features whatsoever):

    https://news.ycombinator.com/item?id=10735434

  • ZeroGravitas 10 years ago

    I have some sympathy with this view; however, I don't see how self-parking and collision avoidance fit in. These seem like ideal places for computer drivers to show their value without affecting most of the driving experience.

  • oska 10 years ago

    I've seen this issue discussed in the context of commercial airline pilots. The suggestion was made that because so much of flying is now done by autopilot, pilots' ability to react quickly and appropriately in a real emergency, when control is handed back to them, has significantly declined--and that we may soon move to completely pilotless airliners that are taken over by ground control in an emergency. (This would also have the side benefit of significantly reducing the risk of hijacking.)

    • ghaff 10 years ago

      I doubt it. Pilots are a pretty trivial cost in airline operations and there are a lot of reasons to have a human who is definitively in charge on the aircraft.

    • cbhl 10 years ago

      How does this reduce the risk of hijacking? An attacker would just hijack ground control instead.

      • jessriedel 10 years ago

        Ground control is much easier to secure. Instead of having to find a needle (a hijacker) in a haystack (the millions of random Americans flying each day) with a 90-second search, you can do proper background checks on the small number of people who are allowed to be there.

        • CM30 10 years ago

          Until one of those people turns out to be malicious. There is no way to tell what anyone is thinking, or whether they've been spending their time outside of work being slowly corrupted by certain influences.

          • jessriedel 10 years ago

            And yet the rate of terrorist hijackings, although tremendously small, is much higher than the rate at which Secret Service agents betray the president.

ksenzee 10 years ago

Google is betting on a system that depends really heavily on detailed mapping. I'd love to know their plan for determining when the map is out of date, because of road construction or whatever. That seems like the hardest part of the whole thing to maintain long-term.

  • ghaff 10 years ago

    AFAIK, pretty much everyone is. That's really been the big shift over the past few years: it has allowed for fairly impressive autopilot levels without AIs having to "understand" and parse the world to nearly the degree people do. Presumably vehicles connected to the system will be able to contribute updates, but maintaining current, high-resolution maps across a wide area will certainly be a challenge. It's not unreasonable to think that the government could play some role here as well, as it does for marine navigation.

  • empath75 10 years ago

    I imagine that if the car ran into something that significantly deviated from what it expected, it would drop out of automatic driving. But Google Maps is remarkably good at keeping up to date.
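
    A minimal sketch of that fallback idea, assuming a map-based stack (the threshold, data types, and interfaces below are invented for illustration, not anything Google has described):

    ```python
    # Hypothetical sketch: disengage autonomy when live perception
    # disagrees with the prior map by more than a tolerance.
    from dataclasses import dataclass

    @dataclass
    class Point:
        x: float  # meters, in a shared local frame
        y: float

    DISENGAGE_THRESHOLD_M = 0.5  # invented number; a real system would tune this

    def map_deviation(perceived: list[Point], mapped: list[Point]) -> float:
        """Mean distance between perceived lane points and their mapped counterparts."""
        assert perceived and len(perceived) == len(mapped)
        total = sum(
            ((p.x - m.x) ** 2 + (p.y - m.y) ** 2) ** 0.5
            for p, m in zip(perceived, mapped)
        )
        return total / len(perceived)

    def should_disengage(perceived: list[Point], mapped: list[Point]) -> bool:
        # Hand control back (or pull over safely) when the world no
        # longer matches the map well enough to trust it.
        return map_deviation(perceived, mapped) > DISENGAGE_THRESHOLD_M
    ```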

  • elchief 10 years ago

    I have a feeling they track you with your Android, notice when no one uses a particular road, and can then check whether it's under construction.
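
    If something like that exists, its core could be as simple as comparing recent traffic counts per road segment against a historical baseline. A purely speculative sketch along the lines of the parent's guess (the segment IDs and thresholds are made up):

    ```python
    # Speculative sketch: flag road segments that historically carried
    # traffic but have seen none recently -- candidates for closure/construction.

    def flag_possible_closures(
        baseline: dict[str, float],  # segment_id -> average daily traversals
        recent: dict[str, int],      # segment_id -> traversals in the last day
        min_baseline: float = 50.0,  # ignore roads that were always quiet
    ) -> list[str]:
        return [
            seg for seg, avg in baseline.items()
            if avg >= min_baseline and recent.get(seg, 0) == 0
        ]

    # Example: "main_st" normally sees 400 traversals a day but none today,
    # so it gets flagged; "back_alley" was always quiet, so it doesn't.
    print(flag_possible_closures({"main_st": 400.0, "back_alley": 3.0},
                                 {"main_st": 0, "back_alley": 0}))
    # -> ['main_st']
    ```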

imh 10 years ago

In light of all the IoT bugs we've been hearing about, it's really nice to hear that they are being super cautious about development here. I hope they keep the same level of caution (or increase it) as they get close to market. My biggest worry here is that as the different car companies get close to market on SDCs, there will be more pressure on each to hurry.

  • thrownaway2424 10 years ago

    My biggest worry is that Tesla's half-baked almost-self-driving-but-not-really will hurt someone and cause reactionary anti-self-driving-car laws.

    • Animats 10 years ago

      Tesla is worrisome, but they backed off some on the automation. Everybody else who has Tesla-level autodrive (NHTSA Level 2, i.e. lane keeping plus radar cruise control) has sensors to make sure the driver has hands on the wheel, or is at least in the seat and looking forward. Tesla didn't put that in. Hence those scary YouTube videos. [1]

      Cruise (YC 14) is just scary. They still have that advertising video online [2] that totally oversells what they can do. All they have is lane keeping and smart cruise control, like the other entry-level systems. It's automated driving from the "move fast and break things" crowd; they come from web and app startups.

      Google is being cautious and testing heavily. But they're spending enough money to test fast, with many cars on the test track. That's the auto industry way of doing things. It takes money, but not decades, to get it right.

      The CEO of Volvo has the liability issue right - when in autodrive, the manufacturer is responsible. If you can't accept that, you shouldn't be doing this.

      [1] http://www.cnet.com/roadshow/news/tesla-autopilot-fail-video... [2] http://www.getcruise.com/

      • ghaff 10 years ago

        It's hard to come up with examples of consumer tech or products where serious injury or death is considered just part of the way things are, even when a properly maintained product was used as directed and there wasn't a clear external factor (e.g., brakes failing on ice). I suppose some failures due to age qualify, and drug side effects to a degree (but see the vaccine injury compensation fund). In general, though, such things routinely result in lawsuits anyway.

    • imh 10 years ago

      I'd almost typed that same sentence in my original comment, but then I reconsidered. It turns out it's not reactionary laws I'm worried about, but the people being hurt. Even worse than overregulation would be a marketplace of car manufacturers where the experience is convenient enough that consumers accept a (relatively) high level of danger.

samstave 10 years ago

I asked this before: if they need to log many hours of driving in various conditions, why can't they hook the brain of the car up to video games like GTA V, Euro Truck Simulator, etc., and have it play thousands and thousands of hours without killing any pedestrians or getting any tickets?

  • TulliusCicero 10 years ago

    Something tells me you didn't read the article:

    > At the end of the shift, the entire log is sent off to an independent triage team, which runs simulations to see what would have happened had the car continued autonomously. In fact, even though Google’s cars have autonomously driven more than 1.3 million miles—routinely logging 10,000 to 15,000 more every week — they have been tested many times more in software, where it’s possible to model 3 million miles of driving in a single day.
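
    What's being described is a replay/counterfactual harness: feed recorded sensor logs back through the driving stack and flag the frames where the planner's choice diverges from what the safety driver actually did. A heavily simplified sketch of that idea (the Frame type and planner interface are invented for illustration):

    ```python
    # Simplified sketch of counterfactual log replay: re-run the planner
    # over recorded frames and diff its choices against the human driver's.
    from typing import Callable, Iterable, NamedTuple

    class Frame(NamedTuple):
        sensors: dict      # recorded sensor snapshot for one timestep
        human_action: str  # what the safety driver actually did

    def replay(log: Iterable[Frame], planner: Callable[[dict], str]) -> list[int]:
        """Return indices of frames where the planner diverged from the human."""
        return [
            i for i, frame in enumerate(log)
            if planner(frame.sensors) != frame.human_action
        ]

    # Divergent frames would then go to a triage team for review,
    # as the article describes.
    ```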
