Video suggests huge problems with Uber’s driverless car program

arstechnica.com

211 points by yohui 8 years ago · 243 comments

phyller 8 years ago

"Uber recently placed an order for 24,000 Volvo cars that will be modified for driverless operation. Delivery is scheduled to start next year."

Wow. The other driverless car players should be all over this lobbying to shut Uber down. If Uber massively deploys a commercial service with subpar quality in order to "win", and then those cars start getting into accidents, the entire field is going to be delayed by 10 years. The general public is not going to just think "Uber is bad", they are going to think "self-driving cars are bad". Politicians will jump all over it and we'll see very restrictive laws that no one will have the guts to replace for a long time.

And honestly, if that happens, that's probably what we would need anyway. If the industry doesn't want to be handcuffed, it needs to figure out some really good standardized regulations on sharing data with law enforcement, on how to determine fault for self-driving vehicles, and on what the penalties should be, ones that are both fair and strict.

  • selestify 8 years ago

    Could Uber be doing this on purpose? They're way behind on self-driving tech, so maybe if they can't have it, no one can.

    • phyller 8 years ago

      The more I think about this the more it makes sense in a horrible way. The important thing for Uber is that they are first to market. If they are first to market there are two outcomes: a) they are successful, they make more money with lower fares because they don't need to pay drivers anymore. They basically take over the market that they are already dominating, beating competitors into bankruptcy. The market explodes as it is cheaper to get on demand cars than to own your own. $$$$$ b) they are not successful. They kill a hundred people in a month or two. The self-driving car industry gets shut down, and for the cost of a few hundred million dollars in settlements they keep their current market dominance in the current industry, but have to keep paying drivers. $$

      The alternative is that they are not first to market; someone else is, and immediately replaces Uber with a network of cheaper self-driving cars: a) Uber goes out of business, or b) they somehow convince someone to license the tech or sell them the cars at a reasonable price, leaving them vulnerable and less profitable, with no market advantage.

    • phyller 8 years ago

      Interesting thought. They do see it as an existential threat to their company, and if any company would do something like that it would be them. But that would be pretty low, basically purposefully killing people in order to protect their business model.

pdkl95 8 years ago

> "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski

Any engineer with this attitude needs to learn the lesson of the Therac-25. The issues in the Ars article are very similar to section 4 "Causal Factors" of the report[1].

> To get to that better software faster we should deploy the first 1000 cars asap.

Is that admitting that they do not have the "better software" and intend to deploy 1000 cars using "lesser software"? That's treading dangerously close to potential manslaughter charges if this willful contempt for safety can be proven in court.

[1] http://sunnyday.mit.edu/papers/therac.pdf

  • phlo 8 years ago

    > willful contempt for safety

    To play devil's advocate on Kalanick's behalf: he might be arguing for more real-world data collection. Tesla famously equipped most of their cars with more sensors than were required at the time of delivery, using the data to drive development of the Autopilot function that was later added to the cars.

    • PinguTS 8 years ago

      Please remember that the original technology of the so-called Autopilot was bought from MobilEye. Please remember that MobilEye is a long-time player in this area and had deployed its technology first with BMW and later with Volvo, well before it was used in Tesla. Tesla only activated features that were not ready for primetime. That was the reason MobilEye backed off from the partnership with Tesla.

      Btw, all this collecting of sensor data and improving on it is an original MobilEye idea, one they presented at conferences before the so-called Autopilot was available from Tesla.

  • PinkMilkshake 8 years ago

    Agile methodology applied to safety-critical systems engineering, god help us.

    • pjc50 8 years ago

      "Move fast and break things. Also break people, but if they're homeless or ride bicycles nobody cares"

      • cmsj 8 years ago

        To quote a friend... "disrupting traditional life expectancy models", or to quote Webshit Weekly... "A/B testing vehicular manslaughter"

  • IshKebab 8 years ago

    > "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski

    He is clearly right about that. Human-driven cars are safety critical and already do fine without redundant brakes and steering. How many crashes are due to brake or steering failure? I'm guessing it's well under 10%.

    Most human crashes are due to bad driving, and for computers it will be the same. I mean, even this fatal crash probably could have been prevented with better software. It's not like the brakes failed. They just weren't applied.

    > To get to that better software faster we should deploy the first 1000 cars asap.

    This is where he is totally mad.

    • davidgould 8 years ago

      Human driven cars have redundant brakes. There is one brake assembly for each wheel. The hydraulic system is split into two separate circuits to guard against leaks. Additionally, cars are fitted with an "Emergency brake".

  • iClaudiusX 8 years ago

    I get the feeling based on comments here that there is a severe lack of ethical and critical thinking among engineers and developers. I recognize that this is only a vocal minority but the constant mantra of "move fast and break things", where getting rich at any cost is seen as a virtue, has made me extremely disillusioned with this brand of startup culture. Doubly so when people are trading stock tips on how to profit from tragedy by supporting the worst actors in the field.

    • JKCalhoun 8 years ago

      I'm not sure the "self-driving car industry" can survive these possible (inevitable?) future scenarios:

      • Self-driving car kills child

      • Hackers send self-driving car on wild ride

      • Empty self-driving semi found in warehouse lot, GPS pirates make off with entire cargo

    • archagon 8 years ago

      The dark, unspoken sentiment behind such comments is: "the end justifies the means". These programmers treat lives as currency for their vision of the future.

    • KKKKkkkk1 8 years ago

      "Move fast and break things" is the motto of Facebook. It means that Facebook engineers are encouraged to make user-facing changes without red tape. If you are claiming that some self-driving car company has "move fast and break things" as their motto, then you are being willfully deceptive.

      • PhasmaFelis 8 years ago

        Have you not noticed that "move fast and break things" is essentially how the entire startup space functions? When a social media company does it, it's merely annoying. Do it with dangerous machines, and people die.

  • wdr1 8 years ago

    Is it better for self driving cars to have a flawless record & low adoption, or to have a 100x improvement over human drivers & broad adoption?

    Creating the perfect self-driving car, with redundant systems, safety everything & so on, will certainly help its safety record.

    But it will also drive up the cost.

    And put it out of reach for a lot of people.

    If the goal is to save lives, the bar self-driving cars should be held to is what humans do driving today, not perfection.

    • discodave 8 years ago

      From the article:

      > But zooming out from the specifics of Herzberg's crash, the more fundamental point is this: conventional car crashes killed 37,461 in the United States in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States.

      I don't think there's enough evidence to say that self-driving cars are as safe as humans.
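
      A quick back-of-the-envelope check of that arithmetic, using only the figures quoted from the article (the 3 million mile total is the article's own rough estimate):

      ```python
      # Reproducing the article's comparison from its own numbers.
      human_rate = 1.18           # deaths per 100 million vehicle-miles (2016 US figure)
      uber_miles = 3_000_000      # article's rough estimate of Uber's autonomous miles
      uber_deaths = 1

      uber_rate = uber_deaths / uber_miles * 100_000_000   # ~33 deaths per 100M miles
      print(uber_rate / human_rate)   # ~28x, in the same ballpark as the article's "roughly 25 times"
      ```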

      • jon_richards 8 years ago

        To be fair, you have to keep in mind that deploying 1000 cars would quickly make self-driving cars safer than humans. Yes, you lose some lives in the interim, but you accelerate the creation and adoption of something that saves many more lives in the long run.

        But that is human experimentation, something we as a culture generally agree is abhorrent.

        • AlexandrB 8 years ago

          > To be fair, you have to keep in mind that deploying 1000 cars would quickly make self-driving cars safer than humans.

          This seems unreasonably optimistic.

          First, this particular crash is an egregious counter-example. The car doesn't even seem to slow down when it first sees the pedestrian's foot. Nor does it try to swerve. This is basic stuff for a human driver, never mind more complex avoidance and risk mitigation a human driver can perform.

          Second, we've had years of training various AI content curation algorithms on social networks, videos, blogs, etc - and the most advanced AI and search company in the world still can't keep adult-oriented conspiracy videos off of Youtube Kids. And while you might counter that content is a human problem, driving is too! Dangerous driving situations happen at the periphery of traffic rules, where someone is doing something the drivers around him don't expect. I've seen people run solid red lights, drive the wrong way down a one-way street, pedestrians start crossing when their light turns red, etc. Short of creating special roads for autonomous cars only - how does an autonomous car deal with all of this successfully?

          • bonesss 8 years ago

            > the most advanced AI and search company in the world still can't keep adult-oriented conspiracy videos off of Youtube Kids

            In fairness here: Google could do this quite easily by throwing people at the problem, e.g. a whitelisted set of content producers, with staff dedicated to curating the whitelist. Google can keep that content off, and quite readily. Keeping it off automatically and cheaply is the issue...

            On that front: automated content recognition that can parse the nuances between regular claymation Elsa and spooky claymation Elsa who talks about jamming things in her "happy spot" is AI-hard. These are not random videos, they are content tailor made and adapted to pass whatever filters are in place.

            For data scientists there is a massive gap between working with concrete sensor data that can be managed reasonably, and undefined philosophical/moral/sexual boundaries in dirty noisy counter-programmed video content... One replaces our eyes and ears and reflexes, the other seeks to replace our fabulous brains.

          • culturestate 8 years ago

            > First, this particular crash is an egregious counter-example. The car doesn't even seem to slow down when it first sees the pedestrian's foot. Nor does it try to swerve. This is basic stuff for a human driver, never mind more complex avoidance and risk mitigation a human driver can perform.

            I'm not sure which side of this I fall on just yet, but something strikes me here: you're assuming a human driver with no impairment (e.g. eyesight or fatigue) paying complete attention to what they're doing. We know that this isn't always the case (attentive drivers could even be a minority!), so this doesn't seem to be a great argument.

            • sunir 8 years ago

              You shouldn’t be driving so fast that stopping distance is longer than visibility.

              It is irrelevant what average humans do. That is the current rule set for all drivers.

        • blt 8 years ago

          No. Machine learning is not magic. Machine learning has nothing to say when the objective function is not well understood.

        • fourthark 8 years ago

          This is assuming that the learning actually happens.

    • gambiting 8 years ago

      Absolutely not. The cars should launch perfect, not just as good as humans. If you take any other human-operated machine and replace it with an automatic one that sometimes kills humans, does it matter if it does so less often than the human-operated one? Absolutely not; a machine cannot have an operating mode where death is possible. Look at the Therac radiotherapy machines: they surely saved hundreds if not thousands of lives from cancer, but they had an operating mode that would kill the patient. Does it matter that a manually operated radiotherapy machine would most likely kill more people due to operator errors? Again, absolutely not.

JohnJamesRambo 8 years ago

"Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States."

Wow there goes that "safer than human drivers" argument.

  • mabbo 8 years ago

    With one data point, you can't extrapolate much. This is misuse of statistics.

    Consider if there was a new lottery and you weren't sure what the odds of winning were. You play it three weeks in a row and the third time you win a million dollars. Conveniently, no one else has tried the new lottery yet.

    Does it follow then that the odds of winning a million dollars are 1 in 3? Or should you play it a few more times before you declare to all that one in three plays will make one a millionaire?

    • slavik81 8 years ago

      One accident is clearly not one data point. If Uber had driven a billion miles with 0 accidents, we would safely conclude they were safer than human drivers with "0 data points".

      Assuming that accidents are independent, we can model this as a Poisson point process. If the accident rate is 1 per 100M miles and Uber has driven 3M miles, the probability of there being zero accidents in that time is P{N=n} = ((λt)^n / n!) * e^(-λt) evaluated at n=0, which is just e^(-λt), where λ=1/100M and t=3M. Doing the math, that's 97.04%.

      So, yes. It is possible that Uber's accident rate is 1 in 100 million. If so, this incident would fall in that remaining 3%. It's unlikely, but possible.
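
      A minimal sketch of that calculation (using the same assumed baseline of 1 fatality per 100 million miles and 3 million miles of exposure):

      ```python
      import math

      lam = (1 / 100e6) * 3e6      # expected fatalities in 3M miles at the human rate: 0.03

      p_zero = math.exp(-lam)      # Poisson P(N = 0) = e^(-lambda)
      print(f"P(no fatalities) = {p_zero:.4f}")      # ~0.9704, i.e. the 97.04% above
      print(f"P(>= 1 fatality) = {1 - p_zero:.4f}")  # ~0.0296
      ```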

      • Fricken 8 years ago

        Gil Pratt, who heads up Toyota's autonomous development initiative, mentioned that we would need to drive 8.5 billion autonomous miles to be able to declare with 99% statistical certainty that autonomous vehicles are safer than human-driven ones. Of course, as we are witnessing, great pains will be taken with every preventable injurious or fatal collision to ensure that sort of failure never happens again, so by the time we get to the 8 billionth mile the software and hardware will have improved considerably, rendering the early data moot.

        One thing we can say about the woman killed the other day by an autonomous Uber is that, unlike the other ~40,000 killed on America's roads over the past year, her death was not in vain.

        Every day that we delay the widespread deployment of this technology, it's another 100 or so people dead. Of course, the public is unlikely to see it that way. They see one death as a tragedy, but 40,000 is just a statistic, business as usual, nothing to get excited about.

        • tmd83 8 years ago

          I don't really understand a whole lot of similar comments. It's as if Uber and all the other autonomous-driving pursuers are doing this for the betterment of everyone's lives, and so their laziness in making sure their tech is good enough can be excused. The car in question failed spectacularly, and the response is that her death was not in vain?

          Uber is doing this for money, and so are all the other companies, even if there are some potentially huge collateral benefits for the human race. Those benefits are definitely not the goal of the companies, regardless of any PR talk. So when you gamble with people's lives for money and fame, you should go to jail for a long time, executives and engineers alike.

          The statistics are just being used to sustain corporate greed, in my mind, and we should not let them. Self-driving cars have lots of potential to save lives; so do other technologies. That doesn't mean all sense of responsibility and ethics goes away just because of the potential.

          • Fricken 8 years ago

            The pharmaceutical industry tries to save lives, for money. They're driven by greed. They also have a long track record of fucking up big time and people dying because of it. Does this mean we should stop giving people medicine? Would letting people get sick and die be preferable to allowing imperfect industries with a profit motive to try and save them?

            • Slartie 8 years ago

              What is currently being done with self-driving car testing on public roads is basically like a pharmaceutical company mixing a new experimental drug into the dishes of random people at a restaurant, people who are neither compensated in any way nor given a chance to decline their involvement in the test.

              Such a thing would be entirely unthinkable in the pharmaceutical industry of today. So if this comparison suggests anything, it is to much more strictly regulate self-driving car development and testing!

            • stevesimmons 8 years ago

              That's a strawman argument... These driverless car tests are more like early phase clinical trials. Clinical trials are conducted under very tight controls, using participants who have given informed consent. Drugs don't get licensed until they can prove efficacy and safety and are approved by the FDA.

              The current procedures for clinical trials are the result of decades of experience, where mistakes (and yes, occasionally shortcuts driven by greed) did result in avoidable deaths of trial participants.

              For an example of how the pharma industry deals with this 'safety first' versus 'stifling innovation' dilemma, read this article: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1526936/

              The inevitable result of this crash is driverless vehicle testing will get regulated more like drug trials...

            • zach43 8 years ago

              That’s not really an apples-to-apples comparison. In the case of the pharmaceutical company, people don’t really have any choice other than to take experimental medicine for terminal-stage diseases.

              On the other hand, there are widely used, cheap, and efficient alternatives to self-driving cars on the roads today: human drivers, public transport, carpools, etc.

              If you want to make analogies, I think the self-driving car accident is more like an elevator company accidentally crashing an experimental high-speed elevator in a shopping mall. I think it would be grossly negligent on the part of the company to test such an unproven device on the general public, especially if people’s lives are being put at a level of risk they did not have to worry about before.

        • jamesgeck0 8 years ago

          Maybe her death wasn't in vain, but it was definitely avoidable. If Uber rushes out half-baked driverless cars, fallout from the incidents they're responsible for will cause serious delays to widespread deployment.

          Trading lives to save on R&D time would violate professional codes of ethics in literally any other industry.

          • rightbyte 8 years ago

            Her death was in vain. These kinds of fundamental scenarios can be practised with dummies or stunt men on closed test tracks, not on public roads...

            If Uber needed data, they could have driven manually. Obviously their obstacle tracking is bad.

          • Fricken 8 years ago

            It wouldn't violate professional codes of ethics in a war, and we're taking wartime casualty numbers on our roads every day.

            This is the real trolley problem when it comes to the ethics of developing self-driving cars.

        • Lewton 8 years ago

          Other big players have taken great pains to ensure that it wouldn’t happen in the first place

          Uber should be banned from doing AI research on public roads

    • tedsanders 8 years ago

      It is not a misuse of statistics for "one data point" to significantly shift our beliefs. Let's do the math.

      Bayesian approach: To make the math really simple, let's assume a discrete prior on Uber's death rate. Say 33% that Uber's cars are much safer than humans (0.1 deaths per 100M miles), 33% that they are equally safe (1 death per 100M miles), and 33% that they are much more dangerous (10 deaths per 100M miles). After observing one death at 3 million miles, your posterior should update to {safer: 1%, equal: 11%, more dangerous: 88%}. This is a substantial shift in confidence.

      Math: http://www.wolframalpha.com/input/?i=(1-10%2F100)%5E2*(10%2F...)

      Frequentist approach: Let the null hypothesis be that Uber's self-driving cars have the same death rate as humans - 1 death per 100 million miles. The odds of Uber killing someone within 3 million miles are about 3%. Therefore, we can reject the null hypothesis with a p-value of 0.03. One positive "data point" is statistically significant.

      Statistically, one death after 3 million miles is not proof that Uber's death rate is higher than 1 in 100 million miles. But it is statistically significant, in both a frequentist and Bayesian framework. You have to get really, really unlucky to have a death at 3 million miles if your death rate is 1 per 100 million miles.

      Bottom line: This collision isn't proof, but it's strong evidence. (To go along with all the evidence from crash rates, disengagement rates, engineers working at these companies, and the video of the crash itself.)
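
      A short sketch of that Bayesian update (the three-hypothesis prior is the one described above; using a Poisson likelihood for "exactly one death in 3 million miles" is an assumption consistent with the earlier Poisson comment):

      ```python
      import math

      miles = 3e6
      # Discrete prior over Uber's fatality rate (deaths per mile), uniform over three hypotheses.
      rates = {"safer": 0.1 / 100e6, "equal": 1.0 / 100e6, "more dangerous": 10.0 / 100e6}
      prior = {name: 1 / 3 for name in rates}

      def likelihood(rate):
          # Poisson probability of exactly one death over `miles` at the given rate.
          lam = rate * miles
          return lam * math.exp(-lam)

      unnorm = {name: prior[name] * likelihood(rate) for name, rate in rates.items()}
      total = sum(unnorm.values())
      for name, weight in unnorm.items():
          print(f"{name}: {weight / total:.0%}")   # ~1%, ~11%, ~87%, matching the figures above up to rounding
      ```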

    • troupe 8 years ago

      If you do want to extrapolate from their data, it would be worth looking at the total number of accidents--not just fatalities. Insurance companies say people file an accident claim every 18 years on average. If the average miles someone drives each year is 12,000 miles, this means they get in an accident every 216,000 miles on average. If Uber drove 3 million miles, we should expect them to have been involved in about 14 accidents over those 3 million miles if the cars are on par with humans.

      (I'm guesstimating some of those numbers, but they should be somewhere in the ballpark.)
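
      Spelling that estimate out (all inputs are the guesstimates from the comment above):

      ```python
      claim_interval_years = 18        # one insurance claim every ~18 years, on average
      miles_per_year = 12_000
      uber_miles = 3_000_000

      miles_per_accident = claim_interval_years * miles_per_year   # 216,000 miles
      print(uber_miles / miles_per_accident)                        # ~13.9, i.e. about 14 accidents
      ```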

      • cmac2992 8 years ago

        You'd also need to account for disengagements. Some portion of all disengagements were likely to avoid accidents.

        • troupe 8 years ago

          Good point. Although I think many of the disengagements are basically the car saying "I don't know what to do safely" and without a driver to take over it would simply pull over and stop.

    • Lewton 8 years ago

      There’s more than one data point showing that Uber should not be allowed near anything as safety critical as self driving software

      They let a car on the road that couldn’t even stop at a red light ffs

      https://www.theverge.com/2017/2/25/14737374/uber-self-drivin...

      The fact that they were allowed to deploy in Arizona after this is really regrettable

      And it’s totally unsurprising that Uber “got the first kill”

    • URSpider94 8 years ago

      It should be possible to create a Bayesian model of the posterior distribution of fatalities at this point. That distribution will be pretty broad, and not Gaussian, so talking about the mean is somewhat meaningless. Nonetheless, you could certainly compare that distribution with the posterior for human-driven cars and draw a conclusion like: “it is Xx% likely that the Uber fatality rate is at least twice that of a human driver.”

    • falcolas 8 years ago

      One data point might not be enough, but two may be too many for the industry to bear.

      It only took two incidents to shut down the Concorde program.

    • soyiuz 8 years ago

      They did not sample three times and win once. They sampled three million times and won once. Hardly "one data point."

      • dfee 8 years ago

        I believe “one data point” referred to one winning, not three plays.

      • mabbo 8 years ago

        The point remains: it isn't fair to decide what the probability is with only one data point on one side, especially for rare events.

        Would it have been fair if Uber last week were to declare that they have a 0% probability of pedestrian deaths, since they'd never had one yet?

        The goal of these statistics is to predict future outcomes. But with such a small data sample, you cannot fairly predict the future, just as in my lottery example.

        • ggg9990 8 years ago

          What if Uber’s first self-driving car killed a cyclist in its very first mile of operation? Would you find it equally hard to draw conclusions?

        • soyiuz 8 years ago

          It would be fair to approximate it as near zero per however many miles they drove, just as it is fair to approximate it as one per three million miles today. Think of it the other way: we know with a high degree of certainty the fatalities are not one per mile, for example.

      • Nomentatus 8 years ago

        It still falls under "one swallow does not make a summer", even though a lot of days of winter preceded that bird sighting.

      • s0rce 8 years ago

        That is one datapoint - one death.

    • Buldak 8 years ago

      One data point? So if Uber had driven a billion miles with zero deaths, they'd have zero data?

      • mabbo 8 years ago

        If you're looking for the rate at which something happens, for the purpose of predicting events in the future, you need to see the event occur many times before you can fairly estimate its rate. That's what my example means.

        I'm not defending Uber here- I'm defending statistics!

        With more data, we may discover that Uber cars are 100X worse, not just 25X. Or we may discover they're better. But we don't have the statistical power to make that estimate when we've only seen the event happen once.

        • sulam 8 years ago

          Your knowledge of statistics could use some enhancement. The response referring to Poisson processes is a better view of things.

          • mabbo 8 years ago

            > Your knowledge of statistics could use some enhancement.

            In this we agree.

        • slavik81 8 years ago

          With 0 accidents you cannot accurately measure the rate, but you can estimate an upper bound on it for whatever probability of being wrong you're willing to accept.

  • imh 8 years ago

    Driverless cars are also being tested in relatively nice driving conditions. People, on the other hand, drive in all sorts of conditions. X deaths per Y easy driving miles is going to translate to many more than X deaths per Y representative driving miles.

    • jaclaz 8 years ago

      >Driverless cars are also being tested in relatively nice driving conditions. People, on the other hand, drive in all sorts of conditions.

      I would add that people represent (as a whole) various kinds of drivers, including the non-expert/freshly licensed, the elderly, the too sure of themselves, those speeding or not respecting road signals (or driving in connection with a crime), the emotionally fragile, those under the effect of alcohol or drugs, those tired from not having had enough sleep, etc., etc.

      The "model" for an autonomous vehicle should be instead a particular "perfect" subset, ideally it should replicate the behaviour of an hypothetical extremely prudent, healthy 30-something with some experience with safe driving, very familiar with the way his/her car handles, having had a nice night sleep, no use of alcohol or drugs, without any external pressure (to arrive in time, to do other chores apart driving) without a cellphone or tablet distracting him/her.

      If we could find this kind of driver (should they actually exist) and isolate from the statistics the number of incidents they caused, that would be the reference benchmark.

  • gtm1260 8 years ago

    I think focusing on the safety statistics is somewhat of a red herring. Uber wins when you get hung up on how well its self-driving cars drive, because that's something they can improve. Instead, I think we should focus on the fact that these dangerous machines are being operated by chronically irresponsible companies, and on the issues cars in general have, not on whether we expect them to be less safe than human drivers.

  • corny 8 years ago

    Any comparison between self-driven miles and typical human-driven miles has to take into account all the times a safety driver took over driving to prevent an accident. Those self-driven miles have a huge asterisk.

    • mturmon 8 years ago

      This is a good point.

      Taking it a step farther, I’d expect that road conditions in the Uber tests so far have been more benign than average city driving. Less bad weather, straighter roads, etc.

  • mozumder 8 years ago

    It's much, much worse than that, since that 37,461 deaths number includes all deaths, including motorcycle/Truck/SUV deaths, which have higher death rates than passenger cars, perhaps 5x-10x higher.

    A proper comparison in this case is comparing passenger car death rates.

    And then you need to factor in other conditions, such as the fact that the weather was clear, and that you should be comparing pedestrian/bicyclist deaths, and you see that this incident already throws the death rate for autonomous vehicles out of whack.

  • username223 8 years ago

    Given exponentially-distributed distance between fatalities, this would have a 3% chance of happening if Uber cars were as safe as humans. So it's unlikely.

  • ben174 8 years ago

    Well maybe not currently safer than human drivers. I don't think any sound-minded person would claim that they will never be.

    • oldgradstudent 8 years ago

      What do you base this claim on? It sounds more like a statement of faith.

      > I believe with perfect faith in the coming of the Messiah; and even though he may tarry, nonetheless, I wait every day for his coming.

    • spacehome 8 years ago

      Currently we're losing a million lives a year (about a 9/11 per day) to auto accidents. I think there's a serious argument that it's ethical to push autonomous driving onto the road before they're safer than humans if that would help them debug the software faster and advance to superhuman safety levels even a day sooner.

      • stevesimmons 8 years ago

        So you're volunteering to be an unpredictable pedestrian or cyclist around large numbers of early-stage driverless cars? Put your ethics into practice!

        The rational response for individual cities is to say: "Do your risky testing elsewhere, thank you very much. You may come back here once you are as safe as our average driver."

        • spacehome 8 years ago

          I'm already a pedestrian surrounded by unsafe human drivers, and every day I hate what I see. What I'm talking about is making a tradeoff between risk now and risk later such that my total risk is lower. I'd take that tradeoff, and I think it's the ethical thing for society to do. What you're talking about is NIMBY, which certainly makes sense locally -- better for some other locale to perfect the technology -- but on the societal level I think it's wrongheaded. But don't worry; there's no danger anybody will take my stance on it seriously. Autonomous vehicles will not be widely deployed until they're provably drastically better than humans.

        • 05 8 years ago

          The rational response is to try passing legislation that prevents companies like Uber from (ever?) testing on public roads while allowing responsible companies like Cruise and Waymo a way to prove their competence and be allowed to test/operate.

      • mozumder 8 years ago

        Why would you want to increase the death rate by pushing more autonomous vehicles onto the road?

        • spacehome 8 years ago

          Because doing so will cause issues (like this one) to be found and fixed faster. When enough bugs are fixed that the autonomous drivers are safer than the meatbag drivers, then the death rate will start to decrease. The argument is that you can trade more deaths today for a sooner decrease later. Under certain assumptions, you'll save net lives.

          • Slartie 8 years ago

            You need to have SERIOUS evidence for the claim that autonomous driving will actually be safer than human driving one day to even attempt to make this argument of yours. "Faith in technology" is not "evidence"!

            If your faith is strong enough, you'll surely be welcome if you sign away all of your rights and serve as an irrational traffic participant in a city where self-driving cars are tested. You could run through the city all day and produce difficult safety-critical situations by suddenly walking in front of them.

            My faith isn't strong enough for doing that, so I expect lawmakers to protect me from companies like Uber who apparently think it's okay to basically make me their guinea pig in their public experiments. But I'll applaud you if you decide to take the risk on yourself for the (potential, unproven) advancement of the human race!

            • spacehome 8 years ago

              I'm already doing it all day for human drivers to no benefit at all.

              I am pretty convinced that computers have the potential to outperform humans in this, but I'm not really interested in hashing out the debate here; the arguments have all been made more elegantly elsewhere. Though I don't know why you keep ascribing 'faith' to me. I never brought that dirty word up. I reasoned my way into my positions.

              • Slartie 8 years ago

                > Though I don't know why you keep ascribing 'faith' to me. I never brought that dirty word up. I reasoned my way into my positions.

                In that case, what do you base your assumption on that self-driving cars will be able to deal with the complexities of today's traffic in a significantly safer way than humans? Because even though you might not bring that word up, if you just assume that to be the case because you expect technology will one day be able to make that wish come true, that IS "faith" in technology. Whether you like that word or not doesn't matter.

          • davidgould 8 years ago

            This argument assumes that autonomous drivers will be safer than human drivers. Which is an unproven proposition. It is possible that autonomous driving may never be safer, or that the rate of improvement may be much slower than expected. How many "more deaths today" do you think are reasonable to test this question? One? ten? one thousand? ten thousand?

            • spacehome 8 years ago

              Yea, it does assume that. I happen to think that's true, but it's a separate argument, and one worth having. Humans are really quite bad at driving, so I think a conservative estimate is that autonomous vehicles could cut road deaths by at least 90%.

              Cars are killing by the millions now, so do the math, and I'm willing to tolerate a lot of deaths in the process, contingent on the deaths actually helping researchers and engineers reduce future faults. I would happily take a million deaths right now if it meant driverless tech instantly became available to everyone, but I think there's an upper limit to how many accidents can be realistically examined at once.

              What's the alternative position? That you're for more road deaths in total? Are you also the kind of person who wouldn't pull the lever in the trolley problem?

              • davidgould 8 years ago

                > I'm willing to tolerate a lot of deaths in the process, contingent on the deaths actually helping researchers and engineers reduce future faults. I would happily take a million deaths right now if it meant driverless tech instantly became available to everyone

                For the sake of the discussion, let us assume that this is hyperbole and that "happily" was an infelicitous word choice. That said it raises a few questions:

                - Uber and the other companies presumably are developing this technology for their own private benefit and do not plan to make it "available to everyone". In which case, how many deaths are acceptable?

                - Should they be successful and succeed in developing a safe autonomous driving system should they be compelled to make it "available to everyone"?

                - Are these million doomed citizens volunteers with informed consent or are they to be struck down unawares at random?

                - What if the technology proves more difficult to develop than you anticipate? Suppose that after one million are sacrificed for the greater good it is improved but not yet good enough. Should we then continue with another million or should we abandon the project after only one million fruitless deaths?

                - Assuming success, should we then ban human driven vehicles completely?

                [edited for formatting]

              • Slartie 8 years ago

                > What's the alternative position?

                Developing and testing the technology in actual test scenarios (build entire testing cities and fill them up with paid stuntmen and -women, if necessary) instead of in public, until you're able to prove that the tech is statistically at least as safe as human drivers. After that, you can continue testing in public, provided that you deal with any changes to your system responsibly and ensure that the public is not exposed to additional danger because of your testing.

                This is actually very common in the software development world. For production-critical systems, companies go to great lengths to create a staging environment that is as realistic as possible, but fully separated from the production systems. You may get away with skipping that effort if the only damage you can potentially do is people not being able to see stupid cat pictures on a social network. But sorry, for life-critical tech, the move-fast-and-break-things approach is irresponsible bullshit.

                > I would happily take a million deaths right now if it meant driverless tech instantly became available to everyone

                What if your own death is guaranteed to be among them? Still "happily taking" it?

                • spacehome 8 years ago

                  Guaranteed, no. But I'd take a 1 million divided by 7 billion chance. It's roughly my odds of dying this year in an auto accident anyways.

          • jamesgeck0 8 years ago

            Why is the FDA so strict about human trials? We could save net lives if we just did drug tests faster to figure out what works. /s

            • spacehome 8 years ago

              You jest, but the comparison is that we do drug tests at all, which we of course do. It's the same calculus. The math works out even more in favor of being aggressive in our pursuit of driverless vehicles, since car crashes kill more prime-age, otherwise healthy people.

            • tmd83 8 years ago

              It's slightly different though. Even if FDA was lax, the participants would be volunteers. This victim though didn't have any say in joining Uber's trial.

  • reedx8 8 years ago

    You can't just look at Uber to make a sweeping conclusion about all autonomous cars on the road. How many miles have Tesla, VW, Volvo, Waymo, Google, Ford, and Apple driven?

  • akkat 8 years ago

    I don't know how to say it kindly, but there is a difference depending on what type of person the car killed. If the fatality was another rule-abiding driver on the road or a pedestrian crossing at a crosswalk, that would be really bad. However, if it was someone not following the safety rules by jaywalking, then that person accepted a higher probability of being in an accident. When making laws, for the most part, they are for the benefit of law-abiding people.

  • vcanales 8 years ago

    /s ?

    It's a pretty unfair comparison, with 1 death on one side and over 30k on the other...

    • JohnJamesRambo 8 years ago

      No, because the rate is normalized per miles driven. That allows us to compare two different things. The sample size is one, but it isn't looking great so far.

buildbot 8 years ago

I don't understand how there isn't a non-ML-based piece of code that looks at moving radar and lidar returns and performs an emergency brake, light flash, horn, or dodge if, with any confidence, an object's vector would intersect the car's path. Even slowing down to 20 mph can turn a fatal accident into an injury.
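
Something along the lines of what's being described might look like the sketch below. This is purely illustrative: the track representation, the constant-velocity projection, and every threshold are invented for the example rather than taken from any real system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float    # lateral offset from the vehicle's path centerline, metres
    y: float    # distance ahead of the vehicle, metres
    vx: float   # lateral velocity, m/s
    vy: float   # longitudinal velocity relative to the vehicle, m/s

CORRIDOR_HALF_WIDTH = 1.8   # assumed half-width of the swept path, metres
FORWARD_RANGE = 40.0        # assumed distance ahead that we care about, metres
HORIZON = 4.0               # how far ahead in time to project, seconds

def on_collision_course(track: Track, dt: float = 0.1) -> bool:
    """Project the track forward at constant velocity and report whether it
    enters the vehicle's corridor within the time horizon."""
    steps = int(HORIZON / dt) + 1
    for i in range(steps):
        t = i * dt
        x = track.x + track.vx * t
        y = track.y + track.vy * t
        if abs(x) < CORRIDOR_HALF_WIDTH and 0.0 < y < FORWARD_RANGE:
            return True
    return False

def react(tracks: list[Track]) -> str:
    """Most conservative action wins: brake hard (plus horn/lights, as the
    comment suggests) if anything is projected to enter the corridor."""
    if any(on_collision_course(t) for t in tracks):
        return "EMERGENCY_BRAKE"
    return "CONTINUE"
```

The hard part, as the replies below point out, is tuning such a rule so it doesn't constantly fire on guardrails, road signs, and cars waiting at side streets.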

  • KKKKkkkk1 8 years ago

    What if it has nothing to do with ML? You see a point cloud that's moving toward your lane at a speed estimate of say 2 mph. If that's below the sensor noise threshold, you might classify the cloud as a stationary object on the other lane (say a stranded car). In that case, by the time you realize that this stationary object has somehow moved itself into your own lane, it is already too late.

    • galdosdi 8 years ago

      ACTUALLY, in many US states it's not just wise, but it is the law that if a vehicle is stopped in an adjacent lane you MUST either move to a further lane (if you can do so safely in time) or else at least slow down.

      This is because stopped vehicles often have people come out of them to fix the vehicle, and too many cops, tow operators, etc, were getting killed by careless motorists.

      Guess this is news to you (and uber), update your driving style please for the sake of first responders!

    • AlotOfReading 8 years ago

      You slow down anyways, because it will almost always improve the situation and reduce your liability. Secondly, if the vision is so bad that it can't identify objects in its lane or predict vectors, it's not ready for the real world.

      • cmsj 8 years ago

        Yes! This is how a sensible driver would behave - not quite sure what's going on ahead of you? SLOW THE FUCK DOWN.

        The story in the linked article about an Uber whipping through an intersection at 38 mph next to two lanes of stationary traffic seems sufficiently conclusive to me that their self-driving system is not ruled by a sense of caution.

        Here in the UK, we have speed limits, but the rules of the road also call on drivers to consider "appropriate speed" - you slow down in situations where you might have to react with very little warning. This ought to be extremely easy for an automated system - it can measure its braking distance with high accuracy, it can measure distances to objects around it with high accuracy, and it can determine exactly which areas of the world around it it can't see that could pose a threat with relatively little warning, so just fucking slow down.

        I've long been bearish on full autonomous driving because I consider there to be so many corner cases in real world driving where ad-hoc non-verbal communication is required to solve traffic flow, that the computers would never catch up. Now I wonder if their solution is to just plough through every problem at 98% of the speed limit and then disclaim responsibility.

      • sokoloff 8 years ago

        This will have you unexpectedly slowing for road signs, guardrails, and other stationary objects on the side of the road when they are on the inside of a curve (as they will have apparent motion towards your lane)

        Set "too tightly", it will also have you slowing for every car approaching a stop sign from a side street.

        Cars that randomly slow out of an excess of caution are also a hazard to other road users. Don't believe that? Go drive for a month and set a series of alarms on your phone every 5-10 minutes. Every time the phone goes off, abruptly slow to half of your prior speed. Do you think you'd make 1,000 miles without causing a road hazard or collision?

        • davidgould 8 years ago

          I'm not sure what side you are on here. Are you saying that autonomous cars cannot be careful because the technology to do so is not good enough, or are you saying that they should not be required to be careful?

          • sokoloff 8 years ago

            I'm saying that a rules based system with "slow down arbitrarily whenever the hell you want" is unlikely to meld well with existing traffic and is likely to cause as well as avoid unsafe situations.

            I believe that might mean that autonomous vehicles are not yet ready for road testing if that is commonly required by the current state of the art. (I last worked on autonomous vehicles in 1991; ours was entirely rules-based and we tested on public roads in addition to private tracks. Ours was bad enough that the human driver hovered over the red E-shut button and was always paying attention. It was harder work than just driving the damn thing yourself, but we had to test in order to make progress. I'm sure loads has improved since then.)

            I also don't think that zero fatalities is a realistic goal, nor is it the standard that should unduly inhibit progress. People have been dying in transit on foot, on horseback, on bikes, and in cars. This is a version of the trolley problem. I don't mind, and in fact actively prefer, a system improvement that allows 100 deaths while saving 500, even if the 100 is entirely disjoint from the counterfactual 500.

            In this specific case, I believe the autonomous car allowed 1 death that would have also been allowed by a human driver in the same circumstances, so it's a push.

    • 05 8 years ago

      Not how lidars work. At some point the cumulative movement of the object exceeds a threshold and you reclassify it as non-stationary. You don’t get velocities from LiDAR - only 3D point coordinates.
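
      A toy illustration of that reclassification idea (the centroid-tracking shortcut and the noise threshold are invented for the example; a real LiDAR tracker is far more involved):

      ```python
      import math

      NOISE_THRESHOLD_M = 0.5   # assumed: displacement below this is indistinguishable from sensor noise

      class TrackedObject:
          def __init__(self, centroid):
              self.origin = centroid        # (x, y, z) of the point-cloud cluster when first seen
              self.stationary = True

          def update(self, centroid):
              # Cumulative displacement since the object was first seen.
              drift = math.dist(self.origin, centroid)
              if self.stationary and drift > NOISE_THRESHOLD_M:
                  self.stationary = False   # moved more than noise can explain: reclassify
              return self.stationary
      ```

      Until the cumulative displacement clears that threshold the object still looks stationary, which is exactly the delay described in the parent comment for a slow-moving pedestrian.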

  • oldgradstudent 8 years ago

    Because it would result in too many false positives, which could be just as bad as not stopping.

    For example, unnecessarily stopping in a middle of a highway is extremely dangerous, especially if visibility is limited or roads are slippery.

sitkack 8 years ago

> "We don't need redundant brakes & steering or a fancy new car; we need better software," wrote engineer Anthony Levandowski to Alphabet CEO Larry Page in January 2016.

Looks like Uber has attracted Levandowski due to his cultural fit.

  • comex 8 years ago

    Hmm, but wouldn’t his priorities be correct in the context of this crash? There hasn’t been any suggestion (so far) that the crash occurred because some hardware component stopped working; rather, it seems like the software failed to identify the pedestrian in time. So better software seems precisely what was needed. Though I can imagine that better sensors might also have helped…

    • ddeck 8 years ago

      The issue is not that he wanted better software, it's that he appeared willing to compromise safety to get it faster in order to beat his competitors to market, as is clear from the remainder of that quote:

      "To get to that better software faster we should deploy the first 1000 cars asap. I don't understand why we are not doing that. Part of our team seems to be afraid to ship."

      And from another email:

      "the team is not moving fast enough due to a combination of risk aversion and lack of urgency"

      • toast0 8 years ago

        The rest of the quote is much more powerful. It's pretty irresponsible to ship 1000 self driving cars onto public roads at this point. (Regardless of who is shipping them)

        On the other hand, redundant steering and braking seem like probable overengineering. Brakes are already somewhat redundant (dual-section master cylinders were common in the 70s and are almost certainly in any modern vehicle), and better software could periodically verify they're working and, if not, coast to a stop. Steering failure could be handled by engaging the brakes. Simultaneous failure is likely rare and catastrophic anyway -- losing a wheel and having the brake pressure go with it can happen, and when it does, you put on your blinkers and hope you come to rest in a safe manner.

        • 05 8 years ago

          So, dual-action master cylinders are OK by you, but actuators are apparently so much more reliable that you only need one of them? And the same goes for the control hardware and power supplies, because you are ready to handle power loss in software? I hope you have the common sense to stay away from engineering safety-critical systems for the rest of your career...

          • toast0 8 years ago

            Dual (or triple? I don't know how many you want) actuators don't help very much if the software doesn't know how to activate them properly (as it seems is the case here).

            You absolutely need a system to ensure a controlled stop in any type of critical failure in ability to control the system. Assuming you have that, it seems reasonable to regularly verify the controls are functional (jiggle the steering, modulate the throttle, gently tap the brakes) every so often, and rely on your controlled stop procedure in the event of failure.

            I do have the common sense to avoid safety critical systems, thanks; however armchair engineering is a national sport.

etimberg 8 years ago

Stuff like this is why P.Engs should be required in certain software fields

matte_black 8 years ago

Why don’t we require software engineers who work on self-driving car software to go through licensing and certification?

And then, if their code results in a death, they are liable and can have their license completely revoked, and they would be unable to work on self driving cars again.

  • superfrank 8 years ago

    - Expecting engineers to always write perfect code is insane. Mistakes happen.

    - If bad code makes it into production, that is a systemic failure not an individual one (Why didn't the bug get caught in code review, QA, etc.)

    - No one is going to want to work on a project where a single failure can taint their career.

    - What if I use a 3rd party lib and that is where the bug is? Who is at fault then? What if the code isn't buggy, but I'm using it in an unexpected way because of a miscommunication? If I am only allowed to use code that I (or someone certified) has written, development is going to move at a snail's pace.

    - What if I consult with an engineer who doesn't have a certification on a design decision and the failure is there, who is at fault?

    - What if the best engineer on the project makes a mistake and ends up banned? Does he/she leave the project and take all their tribal knowledge with them, or are they still allowed to consult? If they can consult, what stops them from developing by proxy by telling other engineers what to write?

    Not to be a dick, but this is an awful idea that would basically kill the self driving car.

    • davidgay 8 years ago

      > - Expecting engineers to always write perfect code is insane. Mistakes happen.

      In safety-critical fields, setting a much higher quality bar than the regular 'it seems to work, the tests pass' seems perfectly rational to me. We can now write provably-correct C compilers (CompCert) and OS kernels (seL4). There's no excuse for not putting similar levels of effort[0] into something as safety-critical as self-driving cars.

      [0]: Note that I'm not advocating for "provably-correct self-driving car software" (that may not be the right approach, as a formal spec is likely unrealisable), but find the argument that "it's ok to write buggy spreadsheets, so it's ok to write buggy self-driving cars" to be morally unacceptable.

    • PinguTS 8 years ago

      There is ISO 26262 for automotive functional safety.

      Yes, there are reviews, QA, and all of that. So, yes, there is no single person responsible (exceptions apply).

      But there is no excuse for using 3rd party libs. Just don't use them. If you don't know them: do not use them.

      That is what certifications are for. The same rules apply for medical and other areas.

      • colatkinson 8 years ago

        > But there is no excuse for using 3rd party libs. Just don't use it. If you not know: do not use it.

        Wait, what? That goes against one of the core benefits of open source software--that having many eyes on a problem decreases the risk of bugs. I'm willing to bet that if Uber had to implement their own machine learning/vision libraries from the ground up, there would be significantly more issues.

        • henrikeh 8 years ago

          Is there any evidence that this is the case? That simply putting more eyes on a system will reveal its problems?

          Certification etc. is about process. Open source code can be used in a safety-critical product, but it must be audited and confirmed against the system requirements.

        • PinguTS 8 years ago

          Please do not get me started with the argument over "it is open source, so there are many eyes who have seen the code".

          The problem with that is that nobody audits working code just for fun. And even if it is buggy, most people look for bugs in their own software first and then work around the problem, so that the original piece is not modified.

          We have seen this in many open source projects. Remember all the obvious, mostly security-related, bugs that weren't uncovered for years. They weren't uncovered because everybody thought: "Huh, that is hard. I assume that others more experienced than I will have reviewed it, so I will trust it."

          The thing with certification is that review is actually required. There is a guarantee that the code has been reviewed, and that people with a different mindset and a different background have reviewed it and, as such, have brought in their own view.

          Certification does not guarantee that something is bug-free. It only guarantees that it has been reviewed. Open source has no guarantee that it is reviewed; there is only hope that someone has reviewed it.

        • Schwolop 8 years ago

          In ISO13485 medical grade software (certain levels of it anyway), the same concept applies. Anything not written in house is "SOUP"; Software of Unknown Provenance. You're required to pass that through a review process before using it, and in many instances it's not worth the effort to review compared to instead just re-writing it yourself.

      • superfrank 8 years ago

        > But there is no excuse for using 3rd party libs. Just don't use it. If you not know: do not use it.

        Pretty much everything in development relies on the work of other people. I used a 3rd party lib just as an example, but what if the bug is in the framework or even the language that an engineer uses? Who would be at fault then? You can't expect every developer to have gone through the entire source code of whatever language they are writing in.

        Sciences build on each other, and after a certain period of time you have to take things for granted in order to keep moving forward.

        > The same rules apply for medical and other areas.

        No, they don't. Doctors kill patients all the time and they aren't banned from medicine for it. There is an investigation; they make sure it wasn't intentional, that there wasn't any gross negligence, and that this isn't a repeating pattern. If none of those are the case, they see what they can learn from it and move forward in the hope that what they learn can help other doctors.

        • Glawen 8 years ago

          That's why in ISO 26262 you can only use certified compilers and tools. The Rust compiler is not certified, for example. Sure, you can use 3rd party libs, but they must be certified.

          I agree with the parent: never copy-paste from the internet in safety-critical SW; it most probably isn't designed for your use case anyway. Personally, I have always been disappointed by copy-pasting stuff; it was always buggy somehow. In the end I always reimplemented it from scratch by reading the theory.

    • velobro 8 years ago

      Structural engineers are liable if the building they design collapses. I don't see why software engineers in safety-critical fields should be any different.

      "The engineers employed by Jack D. Gillum and Associates who had "approved" the final drawings were found culpable of gross negligence, misconduct and unprofessional conduct in the practice of engineering by the Missouri Board of Architects, Professional Engineers, and Land Surveyors. Even though they were acquitted of all crimes that they were initially charged with, they all lost their respective engineering licenses in the states of Missouri, Kansas and Texas and their membership with ASCE.[22] Although the company of Jack D. Gillum and Associates was discharged of criminal negligence, it lost its license to be an engineering firm." - https://en.m.wikipedia.org/wiki/Hyatt_Regency_walkway_collap...

    • simion314 8 years ago

      I think the company must pay a lot if a failure happens. When NASA makes a mistake it costs them dearly, so they make sure to think hard, have good processes, and do a lot of testing.

      From what we have seen so far, the Uber car failed to detect an obstacle, and we also had a Tesla crash where the car did not see a truck, so there are clearly major failure modes that are not being tested for. They need better tests, and maybe better safety drivers in the cars, so that nobody is texting on the phone on the job.

    • chopin 8 years ago

      So you can't be bothered to vet 3rd party software for safety critical uses? That sounds like a no-brainer to me. You can't just slap some NPM libraries together for that use case.

    • InclinedPlane 8 years ago

      This is a straw man. Nobody is asking for perfection. What is needed are processes and regulations that ensure a high confidence that companies with self-driving cars have done the best that can be expected of them to ensure their vehicles are safe and that they have met a minimum bar in that regard as well. There are many industries where this already happens, there's no reason to single out self-driving automobiles as being impossible to regulate properly.

      • always_good 8 years ago

        I can agree with this, but if a company wants to put self-driving cars on the street, I don't see how having certified engineers on staff is the thing that matters.

        Does that demonstrate that the cars are safe? Even a little bit?

        To me it just demonstrates that a grunt is on the chopping block for what amounts to systemic failure.

  • harshbutfair 8 years ago

    From many years developing safety critical software, I reckon culture and processes are more important than certification. There are various standards for developing safety systems in other industries (defence, aviation, etc) and these standards exist for a reason. Have Uber applied any standard for their automation software? Or equivalent development processes? "Move fast and break things" is fine for an app, but not fine for controlling a vehicle.

  • lhorie 8 years ago

    My guess is that it's because the field is so new that there aren't really any experts who can define reasonable rules for such licensing and certification.

    • ghaff 8 years ago

      They exist. They're required for some roles:

      https://insight.ieeeusa.org/articles/professional-licensure-...

      • Silhouette 8 years ago

        But what qualifies the people administering those examinations to judge how good someone is at writing reliable software in safety-critical environments?

        • slavik81 8 years ago

          The examiners don't judge that. They just check that you meet the educational requirements, that you did an apprenticeship under supervisors who have a good safety record, and that those supervisors think you are qualified. Then you get your license. If there's a safety incident relating to your work and you were acting recklessly, you may lose your license.

          That's the whole process. Repeat that over a long enough period of time and you tend to select for a more competent, safety-conscious group of engineers.

          Over the long term, it's survival of the fittest. Reckless engineers have safety incidents and get barred both from working and from supervising new engineers. So it tends to be the safer engineers who get to pass on their work culture to the next generation of engineers.

      • wsh 8 years ago

        The software engineering P.E. exam is being discontinued after the April 2019 administration, due to low turnout: only 81 candidates since 2013.

        https://ncees.org/ncees-discontinuing-pe-software-engineerin...

  • PinguTS 8 years ago

    We have this requirement, at least in Europe. There are even ISO standards to follow; the relevant one is ISO 26262. But it seems this does not apply to the permits issued for these Uber cars.

  • mr_toad 8 years ago

    Engineers aren’t in charge. Unlike lawyers, surgeons, even dental hygienists, they aren’t making the calls.

    • ilaksh 8 years ago

      It may have been an executive who simply said to turn off the LIDAR for testing. The engineer probably mentioned it wasn't ready for live testing, was overruled, and did it knowing it wasn't ready, because refusing might have meant being fired.

      • bigger_cheese 8 years ago

        One of the things I was taught whilst studying engineering (in Australia) was that if, whilst acting in your capacity as a professional engineer, you certify something knowing it is unsafe, then you can be found personally liable.

        Likewise if you knowingly observe anyone else in your company breaching safety/regulatory guidelines then as a professionally certified engineer you have a legal responsibility around ethical disclosure.

        See: http://www.professionalengineers.org.au/rpeng/ethical-commit...

        I do not know how things work in the US but in Australia these rights are protected by law. The company legally can not fire an engineer in this situation.

        • tonysdg 8 years ago

          Professional Engineers (note the capital 'E') are protected in the U.S. by such laws.

          Professional engineers (note the lowercase 'e') are usually not protected in the U.S. -- they're regular employees whose profession happens to be engineering.

  • namelost 8 years ago

    Obligatory: https://www.fastcompany.com/28121/they-write-right-stuff

    If you want error-free software you need a blameless culture based around process, not individual ownership of code. It should not even be possible for an error to be one individual's mistake, because by the time it hits the road it should have gone through endless code review and testing cycles.

  • ubernostrum 8 years ago

    Uber will relocate the engineering team to a jurisdiction without those regulations.

    Just like it relocated its testing to get a more "business friendly regulatory environment".

  • KKKKkkkk1 8 years ago

    There are existing laws on the books for this. Please google negligent homicide. Licensing and certification serve a different purpose.

  • SamReidHughes 8 years ago

    Then we would never have self-driving cars.

aylons 8 years ago

"Move fast and break things" is exactly the opposite of what a responsible driver must do.

  • dmix 8 years ago

    Not sure how that could be the philosophy of any self-driving car company?

    That'd be extremely foolish. And regardless of the dumb things the previous Uber CEO has done in the past and the big deal people are making over a $150 license, they have still hired some of the best engineers in the world.

    You basically have to find the brightest-of-the-brightest to build AI... and Uber pays very well and puts plenty of effort into recruiting that talent.

    Not to mention the massive PR and monetary risks inherent in killing people with your products. That would make any company highly risk-averse.

  • telchar 8 years ago

    I've been joking for at least a year that Uber's motto is "move fast and break people". I'm saddened that this has come to pass but not surprised.

sureaboutthis 8 years ago

I have two problems with this article.

1) They make it appear that Uber is a car manufacturer.

2) Even though Uber has not been determined to be at fault, the author seems to want to make it that way anyway.

  • TillE 8 years ago

    Every engineer on this project at Uber knows very well that their car completely failed in one of its most basic expected functions. It's incredibly obvious, and a number of independent experts have said as much.

    I'd be fairly surprised if there's any real appetite at Uber to continue with this now. It was never anywhere near their core competency.

    • tantalor 8 years ago

      > core competency

      3 years ago Uber hired ~50 specialists from CMU to work on autonomous vehicles. I'd call that a core competency.

      https://www.theverge.com/transportation/2015/5/19/8622831/ub...

      • notyourwork 8 years ago

        Core focus, perhaps; competency implies being competent, and with this I'm less convinced.

        • neom 8 years ago

          My measure of core competency is capability + capacity. (Tesla and Ford, for example, do not have the same core competencies, and to your point, have core focus in each of the others.)

    • saas_co_de 8 years ago

      An interesting question is whether the LIDAR was not being used because of the settlement with Google and the agreement not to use any of the contested tech.

    • Silhouette 8 years ago

      It was never anywhere near their core competency.

      Maybe not directly, but is Uber's current business model sustainable without some form of self-driving technology replacing their human drivers?

      • JimmyAustin 8 years ago

        In the short term, yes, but the moment someone else gets a self driving car they’ll be destroyed.

  • CydeWeys 8 years ago

    The article addresses your second point:

    "Indeed, it's entirely possible to imagine a self-driving car system that always follows the letter of the law—and hence never does anything that would lead to legal finding of fault—but is nevertheless way more dangerous than the average human driver. Indeed, such a system might behave a lot like Uber's cars do today."

    It doesn't matter if Uber makes cars that are technically never at fault; if they're mowing down pedestrians at a rate significantly higher than human drivers, they should never be allowed on public roads. People mess up occasionally. The solution is not an instant death sentence administered by algorithm.

    • sureaboutthis 8 years ago

      That paragraph is the author's opinion, which is my complaint. And, again, your second paragraph illustrates the other issue I have: people stating that Uber is a car manufacturer.

      • CydeWeys 8 years ago

        I'm not sure I understand your objection. If Uber's cars are killing people at a much higher rate than human drivers, then that's a huge problem. They shouldn't be allowed on the roads at all, as they'd vastly increase traffic deaths if widely used. Whether or not someone is at fault in a given accident only matters to the insurance company; that person is still dead. What is your counter-argument? That it doesn't matter if many more people die with Uber self-driving cars on the road so long as everyone who dies made a mistake?

        And whether Uber makes the entire car or not isn't germane to the discussion. They are responsible for the safety of said cars, which is what we're discussing here.

        • sureaboutthis 8 years ago

          Your first paragraph, here, doesn't matter. He is stating opinion as if it were fact, and what he said (not you now) isn't fact at all.

          Your second paragraph emphasizes my point. You, too, are stating that Uber is a car manufacturer, in whole or in part. Does Uber manufacture any parts of this car at all? The impression the opinion piece gives is that Uber manufactures cars.

          • CydeWeys 8 years ago

            This is a pointless semantic argument. Uber did write and is responsible for the self-driving software involved, which is what's at issue here.

  • vamin 8 years ago

    The author is making a distinction between whether Uber was legally at fault (as stated in the article, likely not) versus whether the accident was avoidable. I agree with the author's position that the accident was likely avoidable.

    • femto 8 years ago

      I think there is an argument for removing this ambiguity by making self driving cars automatically liable for all personal injury, whether the person be inside or outside the car. The only exception would be if the tester could prove intent by the other party, keeping in mind that a self-driving car will have extensive logs of its environment to use in such a defence. Supporting arguments include:

      1) The physics of driving isn't random, so it could be said that there are no accidents in autonomous driving, only oversights.

      2) It would set a minimum performance level by making it prohibitively expensive to have a dangerous car. Those who test responsibly would have a low enough injury rate that they could deal with the risk by taking out suitable insurance.

      3) It would provide a strong incentive to make the best car possible and not to take expedient shortcuts.

      4) Over time automatic liability would become irrelevant if it asymptotically forces the injury rate to zero.

      5) We have an historic opportunity to create a culture that will eliminate the danger of cars. It might carry a one-off increase in short-term cost, but a huge long-term payoff in reduced health costs and human misery. If we miss this opportunity we will be stuck with the long-term cost of an industry competitively driven towards poorer performance, potentially against the will of the majority of players, by the actions of a few.

      • URSpider94 8 years ago

        Your concept that there are no accidents, only oversights, is not correct. Say I’m crossing the street and I don’t see your car coming, because I look the wrong way or I’m distracted, or my view is blocked by the sun. You may not be able to avoid hitting me.

        Additionally, if we train self-driving cars to always give way to pedestrians who even look like they might cross the street, they’re going to have a heck of a time getting through cities. Kids are going to learn that they can trigger a squealing emergency stop by lunging towards the curb - great fun!

        What I think WILL happen is that autonomous cars will have to buy blanket insurance policies that cover their entire fleet. High accident/fatality rates will result in high insurance premiums, which will put bad actors out of business.

        • femto 8 years ago

          I think your first paragraph is written from the perspective of a human driver. The machine can predict based on physics. If anything a person who does not see a car will be more predictable, not less, as they will tend to be moving uniformly and not taking evasive action.

          Taking into account optimal (computer-driven) car stopping distances, the maximum acceleration of a person, and typical pedestrian densities, I don't think getting through cities would be an issue, especially outside the CBD. Even in areas with very high pedestrian traffic, cars will be able to get through. In support of this argument, I offer today's "shared" pedestrian zones, where cars and people mix. Pedestrians have right of way, cars are limited to 10km/h, but the cars manage to get through without injuring anyone. Cars would naturally do high speeds on main roads with low pedestrian density and lower speeds (with very short stopping distances) at high pedestrian density.

          Why should children have a propensity to intentionally jump in front of cars, above and beyond anyone else? That's bias.

          If anyone intentionally jumped in front of a car then it would be covered by the exemption that I proposed: that the car would not be liable if intent could be proved. Based on the car's sensor logs it would be pretty easy to prove that someone intended to get hit. If the car managed to stop and the person ran away it would then be relatively easy to track that person down based on the logs and charge them with a crime. In any case, I think that that is a hypothetical situation which is unlikely to occur. For the vast majority of people self preservation would trump the desire to cause trouble by putting oneself in a terrifying and life-threatening situation, so I think it would be negligibly rare.

          • URSpider94 8 years ago

            My point is not that children would jump in front of cars. It’s that they would run toward the curb with a heading and velocity that would dictate a collision, then stop just before entering the street. They would do it for the main reason that children do things - because it would be hilarious to watch self-driving cars leave a patch on the road and jerk their passengers back with seatbelt pretensioners. If you don’t actually set foot in the street, you’re not breaking the law.

            Changing the average travel speed from 45 kph (the city-wide speed limit in New York) to 10 kph would be a disaster.

            • cmsj 8 years ago

              This makes no sense.

              If I see a kid running towards the road, I'm going to slow or stop my car. I'm not going to think "oh it's just a kid being a jerk, they'll definitely stop before the road" because I don't want to run the risk of squashing a kid.

              Therefore, I behave exactly the same as a cautious AI would be expected to behave.

              So, why aren't kids running to the edge of the sidewalk when I'm driving?

            • henrikeh 8 years ago

              In most countries obstructing the flow of traffic, no matter the method, is illegal.

              And what alternative do you suggest? Not braking?

        • powercf 8 years ago

          > Your concept that there are no accidents, only oversights, is not correct. Say I’m crossing the street and I don’t see your car coming, because I look the wrong way or I’m distracted, or my view is blocked by the sun. You may not be able to avoid hitting me

          Solvable by reducing speed sufficiently (rough stopping-distance numbers in the sketch at the end of this comment). It's reasonable to expect the car to avoid a person travelling at 10kph from any off-road blind-spot, and a car travelling at 100kph from any on-road blind-spot.

          > Additionally, if we train self-driving cars to always give way to pedestrians who even look like they might cross the street, they’re going to have a heck of a time getting through cities. Kids are going to learn that they can trigger a squealing emergency stop by lunging towards the curb - great fun!

          If you sprint to the curb in front of traffic today, drivers will stop/swerve. Almost certainly illegal too.

          My interpretation of the parent post is that more responsibility can be put on the car to avoid accidents, than is currently the case today. Hence greatly increasing road safety. It sounds great!
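
          To put rough numbers on "reducing speed sufficiently": with a simple constant-deceleration model, stopping distance grows with the square of speed. A minimal sketch, using made-up reaction-time and braking figures rather than anything measured from a real vehicle:

              # Total stopping distance = reaction distance + braking distance:
              #   d = v * t_react + v^2 / (2 * a)
              # The reaction time and deceleration below are illustrative assumptions.
              def stopping_distance_m(speed_kph, reaction_s=0.5, decel_ms2=7.0):
                  v = speed_kph / 3.6  # km/h -> m/s
                  return v * reaction_s + v ** 2 / (2 * decel_ms2)

              for kph in (10, 30, 50, 100):
                  print(f"{kph:>3} km/h -> {stopping_distance_m(kph):5.1f} m to stop")

          With those assumptions you need roughly 2 m to stop from 10 km/h but around 70 m from 100 km/h, which is why how much the car must slow down depends so strongly on how far ahead it can see.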

          • URSpider94 8 years ago

            It’s not a blind spot I’m worried about. People frequently walk at full speed up to the edge of the sidewalk, then stop just before they would walk into traffic. Should an autonomous car assume that any pedestrian walking towards an intersection is going to continue into the roadway, even if they don’t have the right of way? That’s not what a human driver does.

            Likewise, I can be standing still with my toes on the curb, and then lunge into the street. Should a self-driving car assume that every pedestrian standing at a crosswalk could walk into traffic at any moment, and slow down accordingly? Again, that’s not what human drivers do.

            There are a number of surface streets near my house with speed limits of 45 mph, and crosswalks every 1/8 mile or so. Requiring cars (autonomous or not) to avoid any possible pedestrian incident at every such intersection would be a disaster for traffic throughput and a huge step backwards.

            • DanBC 8 years ago

              > That’s not what a human driver does.

              A human should be aware that these pedestrians might enter the roadway. The human should perceive those pedestrians as a risk, and be ready to take action.

              > Should a self-driving car assume that every pedestrian standing at a crosswalk could walk into traffic at any moment,

              Yes.

              > and slow down accordingly?

              This doesn't follow. The car doesn't need to slow down. It does need to be ready to perform an emergency brake.

              • URSpider94 8 years ago

                My bigger point, which is getting lost, is that there are rules to traffic (pedestrian, bike and auto) that we are all expected to obey. I do not agree that we can simply assume that a self-driving car assumes all liability for any accident (which is what the parent to my original comment posited). The rules of the road let us operate vehicles at tolerances that make it physically impossible to avoid every kind of collision - for example, as I’ve mentioned, a pedestrian that suddenly sprints into cross traffic traveling 40 mph.

                I fully agree that autonomous vehicles can and should do everything they can to avoid accidents. We are in violent agreement there. However, I also think that if we set some kind of unrealistic standard for safety, then we are going to make self-driving vehicles completely unappealing to everyone, because they are going to drive like a cross between my grandmother and a startled squirrel.

                This is not a new idea of mine - the issue of the too-polite autonomous car has been extensively studied and reported on. See https://www.google.com/amp/s/mobile.nytimes.com/2015/09/02/t... for just one example.

                • cmsj 8 years ago

                  > they are going to drive like a cross between my grandmother and a startled squirrel.

                  That genuinely might not be a bad thing, at least in the initial phases. They should be ruled by an abundance of caution until we're sure they can actually make more aggressive decisions.

                  Uber seems to be defaulting to maximum aggression from the outset, which is hardly surprising from them, but seems extremely over-confident (in fairness, like most new drivers are ;)

                • DanBC 8 years ago

                  > is that there are rules to traffic (pedestrian, bike and auto) that we are all expected to obey

                  The point of defensive driving is that you can't rely on other people not to be incompetent.

                  You have a green signal at a traffic light. That does not mean "go", it means "proceed with caution".

        • stordoff 8 years ago

          > Kids are going to learn that they can trigger a squealing emergency stop by lunging towards the curb - great fun!

          Isn't that already the case (and arguably they'd get more satisfaction out of it in some cases by seeing an annoyed driver)? Further, I'd say that's what _should_ happen -- if someone looks like they are going to enter the road, you stop.

        • telchar 8 years ago

          You're suggesting that self-driving cars may have to drive slowly and carefully in cities. I don't really see a problem with that. Pedestrians aren't frequently in the immediate vicinity of roads with limits higher than 35mph in my experience. Any human driver has to drive cautiously in the vicinity of pedestrians anyway, if they don't want to risk manslaughter. I'd want SDVs to do that too.

          • URSpider94 8 years ago

            Slowly and carefully, yes. Assuming that any pedestrian could leap in front of your car in violation of traffic laws - no.

            Where I live in California, there are two major streets with crosswalks in regular use, with speed limits of 40 mph or higher (and travel speeds of 50+ mph), within a quarter mile of my home.

    • Guvante 8 years ago

      I think even if you think the accident was unavoidable it is certainly damning that the car never attempted to stop.

    • chopin 8 years ago

      In Germany, when an accident is avoidable, you are 100% at fault (as it should be), criminally and civilly. Isn't that the case in the US? I'd be astonished if it weren't.

      If you enforce your right of way and kill somebody, that's manslaughter in my book.

  • hndamien 8 years ago

    I think the standards are different in this case. While the pedestrian definitely should not have been where they were, and if this were an incident with a human driver you would probably say the driver was not at fault, I think this is slightly different.

    They are on the road under conditions because what they are doing is still somewhat experimental. There is a safety driver for a reason, and that driver did not respond. A human driver may have collided but would have responded and potentially avoided a fatality (if not a collision). The supposed benefits of autonomous driving completely failed on all counts in this case, which implies that a public road is far too early for Uber, and suggests some fault lies with Uber or the regulators.

  • saas_co_de 8 years ago

    The other missing part is that it is the human driver who is responsible. This is a test vehicle and their job is to be ready to take over at any time as if they are driving.

    It seems unlikely that the Police will find any fault because they probably don't want to have to file a criminal charge against the driver, but that is who it would go against if there was fault.

    • foobarian 8 years ago

      It surprises me how downplayed this is. If you legally treat the Uber car as an ordinary car that happens to have a really fancy auto-assist, the driver should be on the hook. This person had eyes off the road for 5 seconds prior to the crash according to the article.

      There was a case some years ago in Boston where a subway rear-ended another one because the operator was texting. The driver was fired and probably would have been prosecuted if there had been fatalities. Taking eyes off the road for this long seems insane to me.

    • Nomentatus 8 years ago

      One more person who had no trouble taking the salary, but just didn't want to do the job. Fraud, on their part IMHO.

  • Tobba_ 8 years ago

    Yeah I'd be fairly concerned about them lying to or simply bribing the police too.

joejerryronnie 8 years ago

Why is everyone considering it a foregone conclusion that self-driving cars will quickly become much, much safer than human-driven cars? Yes, lots of people die every year in human-driven car accidents. But it is equally true that our most sophisticated AI/ML can only really operate within very narrowly defined parameters (at least when compared to the huge sets of uncertain parameters humans deal with every day in the real world). Driving is perhaps one of the most unpredictable activities we can engage in, anecdotally supported by my daily commute. What if our self-driving software never becomes good enough? How many more deaths are we willing to go through to find out?

speedplane 8 years ago

I was at SXSW a few weeks ago and went to an Uber driverless car talk. They spent the first half of the talk discussing driver safety; it felt incredibly hollow.

If you really cared about safety, there are far more immediate and impactful things to do than spending billions on self-driving cars. If they had come out and said that they were doing it to make money or to make driving easier, it would have carried more weight. But you just can't trust a word this company says.

RcouF1uZ4gsC 8 years ago

Are you surprised? Uber is a company that:

* Flouted Taxi regulations

* Lived in legal gray zones in regard to contractors vs employees

* Designed a system to avoid law enforcement

* Performed shady tactics with its competitors

* Illegally obtained the private medical records of a rape victim

* Created a workplace where sexual harassment was routine

* Illegally tested self-driving cars on public roads in California without obtaining the required state licenses.

* Possibly stole a LIDAR design from a competitor

Now their vehicle killed a pedestrian in a situation that the self driving vehicles should be much better than humans at (LIDAR can see in the dark, and the reaction time of a computer is much better than humans.)

Uber has exhausted their "benefit of the doubt" reserve. Maybe, they need to be made an example of with massive losses to investors and venture capitalists as an object lesson that ethics really do matter, and that bad ethics will eventually hurt your bank account.

  • dsfyu404ed 8 years ago

    >"One of my big concerns about this incident is that people are going to conflate an on-the-spot binary assignment of fault with a broader evaluation of the performance of the automated driving system, the safety driver, and Uber's testing program generally,"

    Self-driving cars are currently in that state where they're always in accidents but never technically at fault. When individuals show this behavior pattern their insurance company drops them, because if they're so frequently present when shit hits the fan, they're a time bomb from a risk perspective.

    Edit: meant to reply to parent, oh well.

  • Stanleyc23 8 years ago

    if they are at fault they should be punished, but you do realize that the expectation for self driving vehicles is not to eliminate all car related deaths, right?

    edit: wow this triggered some people. somehow 'if they are at fault they should be punished' got interpreted as 'they are not at fault and should not be punished'

    • bobthepanda 8 years ago

      From the article:

      > But zooming out from the specifics of Herzberg's crash, the more fundamental point is this: conventional car crashes killed 37,461 in the United States in 2016, which works out to 1.18 deaths per 100 million miles driven. Uber announced that it had driven 2 million miles by December 2017 and is probably up to around 3 million miles today. If you do the math, that means that Uber's cars have killed people at roughly 25 times the rate of a typical human-driven car in the United States.

      We have a sample size of 1, granted, but it's not looking very good. At the very least they were expected not to be less safe than humans.
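
      For what it's worth, the article's arithmetic is easy to reproduce; the only real assumption is how many autonomous miles you credit Uber with:

          human_rate = 1.18 / 100_000_000   # deaths per mile, from the article
          uber_miles = 3_000_000            # the article's estimate of Uber's AV mileage
          uber_rate = 1 / uber_miles        # one fatality so far

          print(f"Uber rate / human rate ~ {uber_rate / human_rate:.0f}x")

      With 3 million miles this comes out to roughly 28x, and with the 2 million miles reported in December 2017 it would be over 40x, so the article's "roughly 25 times" is in the right ballpark.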

      • clairity 8 years ago

        > "We have a sample size of 1..."

        i'm not sure that's the right way to look at it (that uber is a sample of 1, or that the death is a sample of 1).

        in this case, the metric is deaths per mile, so there are purportedly 3 million samples for uber self-driving cars, with one positive (negative?) result in that sample. you need so many samples because the positive observation rate is expected to be very low (as evidenced by the 1.18 deaths per 100 million miles driven by human drivers).

        if you assume the death rate is roughly the same, you can (roughly) estimate the expected error or confidence interval from the 3 million-mile sample versus the rate known from 100+ million miles of human driving. as more samples are gathered, the confidence interval gets tighter: whatever the margin of error is with 3 million samples, it shrinks as you get to 6 million samples, and so on.
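
        a minimal sketch of how wide that interval actually is with one fatality in roughly 3 million miles, using an exact poisson interval (scipy is assumed available, and the mileage figure is just the article's estimate):

            from scipy.stats import chi2

            miles = 3_000_000   # article's estimate of uber's autonomous miles
            deaths = 1          # fatalities observed so far

            # exact (garwood) 95% confidence interval for a poisson count
            lo = 0.5 * chi2.ppf(0.025, 2 * deaths)
            hi = 0.5 * chi2.ppf(0.975, 2 * (deaths + 1))

            per_100m = 100_000_000 / miles
            print(f"point estimate: {deaths * per_100m:.1f} deaths per 100M miles")
            print(f"95% interval:   {lo * per_100m:.2f} to {hi * per_100m:.0f} per 100M miles")

        with these numbers the interval runs from roughly 0.8 to about 185 deaths per 100 million miles, so the human benchmark of 1.18 only just sits inside it: the point estimate looks terrible, but the data alone can't yet pin the rate down precisely.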

      • roenxi 8 years ago

        You have a sample size of 1. Acknowledging that doesn't suddenly make the sample evidence of anything, good or bad :P.

        Almost all the miles driven are going to be in near-ideal circumstances (daylight, no rain, good road surface, driver familiar with normal road traffic conditions and drives the route regularly). I have nearly no insight into the uber death, but I gather it happened at night. It could easily be that humans are also an order of magnitude more dangerous at night.

        • XMPPwocky 8 years ago

          Of course it's evidence of something!

          Suppose, as a massive oversimplification, Uber's self-driving cars crash with some constant probability P for every mile driven (i.e. a Bernoulli process).

          We now have learned at least one thing with absolute confidence: P > 0.

          The first mile driven before the accident, of course, also showed P < 1.

          But beyond absolute certainty, we also have a better idea of the actual value of P. (Intuitively, the longer we go without a crash, the lower we suspect P to be, and for every crash, we increase our estimate of P).

          If there was only one crash in 3 million miles driven, this is evidence for values of P near 1/(3 million), and evidence against values of P far from it.

          Is it strong evidence? Nope! But it's evidence!
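
          A minimal sketch of that update, sticking with the oversimplified per-mile Bernoulli model and a uniform prior (the 3-million-mile figure is just the article's estimate):

              from scipy.stats import beta

              miles = 3_000_000
              crashes = 1

              # Uniform Beta(1, 1) prior on the per-mile crash probability P,
              # updated with the observed Bernoulli outcomes.
              posterior = beta(1 + crashes, 1 + miles - crashes)

              lo, hi = posterior.ppf([0.025, 0.975])
              print(f"posterior mean for P: {posterior.mean():.2e} per mile")
              print(f"95% credible interval: {lo:.2e} to {hi:.2e}")

          The posterior does concentrate around values on the order of 1/(3 million) per mile, but the interval is still wide, which is the "weak evidence" part.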

          • roenxi 8 years ago

            You obviously want to be technical. Technically you are correct. However, the evidence that this car is a worse driver than a human is currently so weak we're both wasting time talking about it. We need more of the stuff for it to be worth considering.

            Your pedantry has managed to upset me and I would encourage you to be a little more understanding of people using language in the way it is used outside of the world of mathematics. Walking into a practical discussion of safety with an existence proof of all things is disrespectful of the fact that lives and enormous quantities of human attention are at stake.

            Obviously, technically everything is evidence of something. I know that. Using the language in that sense is not going to help.

        • Nomentatus 8 years ago

          It just takes one fish in the milk, to paraphrase Thoreau.

          Something happened that just shouldn't be possible, with Lidar and half-competent AI - which we use in good part because cameras don't handle low-contrast nearly as well as human eyes, say at night.

        • bobthepanda 8 years ago

          The singular of data is an anecdote, as they say.

          > It could easily be that humans are also an order of magnitude more dangerous at night.

          Given that the actual circumstances of the death (a pedestrian crossing left to right on a large road, with the car in the rightmost lane) were a case that autonomous cars should have been much better equipped to deal with, and the Uber clearly failed, it's not promising. Autonomous cars were supposed to be better than humans; this particular one does not seem to be, given that as a human I can pretty clearly see someone crossing from the far side of the road.

        • simion314 8 years ago

          I am not the OP, but these numbers do not confirm the optimistic view that self-driving cars are better. We do not have numbers showing they are better, and even with a human driver inside we have had incidents, so the numbers without a safety driver would have been worse. So how should we determine that self-driving car X is ready to be released on public roads for testing? At least we should have some basic tests done by an impartial authority.

        • rhizome 8 years ago

          I'm not sure that's the point you want to make; even if humans were an order of magnitude more dangerous at night, that would still leave Uber 2.5x more dangerous.

    • simion314 8 years ago

      There are only 16 cars (or something like that), each with a safety driver, and with those 16 cars we already have 1 death. Normalize that to the number of human-driven cars and you get a lot of deaths. I do not have the exact numbers, but 1 in 16 cars killing a person is a large number, and if you remove the safety driver you get even more deaths.

      The point is that it is too early to test these cars on public roads. That self-driving cars will kill fewer people is just a hope: we may never achieve it (there is no proof we can with current tech), or it may take more time. We need actual numbers that are not tampered with before letting these self-driving cars on the road, and the safety drivers do not seem to be paying attention, so this kind of testing is a risk.

    • ubernostrum 8 years ago

      The expectation is that a car that can "see" in a broader part of the spectrum than a human should have, y'know, detected a person in the street ahead, when there were no obstructions between the car and the person.

      There have been cars on the market for years now that can detect a dangerous situation ahead, even in the dark, and auto-brake. If they haven't caught up to several-years-ago's consumer model of car, then what on earth are they doing putting these things on public roads?

    • notahacker 8 years ago

      I think it's fair to set the minimum expectation of not being much, much worse in a situation where decision making is utterly straightforward and where sensors ought to give it a significant advantage over humans. It also would appear that the vehicle fell well short of that minimum standard on this occasion.

    • PinguTS 8 years ago

      Almost all current assistance systems in luxury cars, which are not designed for automated driving, could have avoided this situation by activating the emergency brake. But this car didn't even brake at all?

      It will be interesting when Volvo pulls back from that relationship, because even the standard, up-to-date emergency brake assistant should have caught this. Unless Uber has deactivated those systems in favor of its own technology (because it is superior).

      EDIT: The German automobile club ADAC tested full-sized cars in 2016, which in IT terms means decade-old technology. The result was that the Subaru Outback was the only one that detected pedestrians in the dark, but it was very poor with cyclists. https://www.adac.de/infotestrat/tests/assistenzsysteme/fussg...

      • neffy 8 years ago

        I think a lot of us are of the opinion that it wasn't as dark as the camera makes it look. But let's say it was... a human driver under those conditions would have full beam headlights on, and I've never personally experienced any issues spotting cyclists and dark clad pedestrians with those on.

    • XR0CSWV3h3kZWg 8 years ago

      Driverless car programs should really not be having more than 1 fatality per 100 million miles. Uber is currently at 1 per 3 million miles.

      • mr_toad 8 years ago

        A sample size of 1 is completely meaningless.

        • TillE 8 years ago

          It's not, because we have considerable detail about that one sample. This wasn't simple bad luck, it was performance on par with a completely distracted driver.

        • XR0CSWV3h3kZWg 8 years ago

          Why do you say it's a sample size of 1? They've been driving AVs for a while.

    • mannykannot 8 years ago

      That general statement does not give Uber a pass on this particular incident, any more than this particular incident casts aspersions on the autonomous vehicle industry as a whole.

throwaway010718 8 years ago

Machine learning and AI are data-hungry algorithms, and the concern is that there isn't enough "emergency situation" data. Also, a detector cannot have both a 100% probability of detection and a 0% probability of false alarm. You have to sacrifice one for the other, and where that trade-off sits is usually driven by weighted probabilities and priorities (e.g., a smooth ride).
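
A toy illustration of that detection/false-alarm trade-off, with entirely made-up score distributions, just to show that moving the decision threshold lowers one error rate only by raising the other:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up classifier scores: higher means "obstacle ahead".
    obstacle_scores = rng.normal(2.0, 1.0, 10_000)   # frames with a real obstacle
    clear_scores = rng.normal(0.0, 1.0, 100_000)     # frames with nothing there

    for threshold in (0.5, 1.0, 1.5, 2.0):
        detection = (obstacle_scores > threshold).mean()
        false_alarm = (clear_scores > threshold).mean()
        print(f"threshold {threshold:.1f}: detect {detection:.1%}, false alarm {false_alarm:.1%}")

Lowering the threshold catches more real obstacles but produces more phantom braking, and vice versa; where that knob is set is exactly the "smooth ride" weighting mentioned above.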

d--b 8 years ago

Uber's culture is bad for anything really...

kristianov 8 years ago

I hope on-road testing could be more like human drug testing. After all, both affect human lives.

  • oldgradstudent 8 years ago

    Drugs are tested under informed consent, not on unwilling third parties.

    Maybe testing of autonomous vehicles should be done off public roads (at least at this stage of development).

ghfbjdhhv 8 years ago

This event has me thinking about the job of the behind-the-wheel backup driver. They get an easier job than a real driver, at the cost of potentially taking the fall if an accident occurs. I wonder if the pay is better.

  • TylerE 8 years ago

    I actually don't think it's really easier. Continuous attention is easier to maintain than hours of boredom only to have to react out of nowhere...maybe.

  • icc97 8 years ago

    It's an equivalent job to a train driver

icc97 8 years ago

I thought the speed limit was 35mph, but the article claims 40mph.

bambax 8 years ago

"Testing" of driverless cars seem to be the wrong way around. Software should try to learn from human drivers: watch them instead of being watched by them.

The way it would work: the human is driving and the software is, at the same time, watching the driver and figuring out what action it would take. Every time the driver's and the software's behavior differ, the difference is logged and analyzed to figure out why it happened and who guessed better (a rough sketch of what that logging could look like is at the end of this comment).

But the way testing is currently done, it seems millions of miles are wasted where nothing happens and nothing is learned.
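
A minimal sketch of that "shadow mode" idea, with hypothetical types, a hypothetical planner callback, and an arbitrary disagreement threshold (nothing here reflects how any real system is built):

    from dataclasses import dataclass

    @dataclass
    class Action:
        steering: float  # radians, positive = left
        braking: float   # 0..1 brake pedal fraction

    def disagreement(human: Action, planned: Action) -> float:
        """Crude scalar measure of how far the software's plan was from the human's."""
        return abs(human.steering - planned.steering) + abs(human.braking - planned.braking)

    def shadow_step(frame_id, sensors, human, planner, log, threshold=0.2):
        """Run the planner on the same inputs the human saw, but never actuate.
        Log only the frames where software and human meaningfully disagree."""
        planned = planner(sensors)
        score = disagreement(human, planned)
        if score > threshold:
            log.append({"frame": frame_id, "human": human, "planned": planned, "score": score})
        return planned

Every logged frame is a case where either the human or the software made the wrong call, which is exactly the data worth analysing.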

  • Animats 8 years ago

    No, what that gets you is smooth normal driving and poor handling of emergency situations. People have tried using supervised learning for that - vision and human actions for training, steering and speed out. Works fine, until it works really badly, because it has no model of what to do in trouble.

    • mickronome 8 years ago

      While I understand the issues, it's still much better in the sense that it doesn't kill anyone while failing to learn how to drive. The fact that we can't develop something without exposing people to risk doesn't create a right to expose people to risk. It should be seen either as an impassable obstacle, or as motivation to solve the problem of learning safely.

      Some argue that it is okay because it will decrease risks in traffic in the long run. That is not a valid argument for allowing on-road bug-testing: there is a lot of medical research that we as a society don't allow because of ethical concerns, even some research where the risk of death is essentially zero. Applying research ethics to the Uber situation, Uber's vehicles would under no circumstances be allowed on the road until it could be proven that they were at least as safe as the vehicles already operating on it.

      So while the technique suggested might not work well to solve the problem of safe autonomous cars, the more dangerous alternatives should absolutely not be allowed.

  • c06n 8 years ago

    > Software should try to learn from human drivers

    Yeah, that doesn't work though. Basically because you would need to have an excellent situation representation to really understand the drivers' reactions to outside events. But that does not exist.

    Perception and situation representation are key to mastering the driving task, and they both differ greatly between humans and machines.

    • Silhouette 8 years ago

      Basically because you would need to have an excellent situation representation to really understand the drivers' reactions to outside events. But that does not exist.

      This is the #1 reason I'm sceptical about self-driving cars becoming ubiquitous any time soon. Clearly they have potential advantages over a human driver in terms of not being tired or distracted, better sensors and better reaction times, but their judgement in any given situation will always be a function of some predetermined inputs. It's a brute force approach.

      Until a self-driving car can recognise a pub door opening around throwing-out time where a drunk patron is about to stumble out into the road from a hidden position, or that it's about to pass a park and a nearby school just finished for the day so kids will be kicking balls around and running across the road to join their friends, or that the recent weather conditions make black ice likely and the cyclist it's about to pass doesn't seem very steady, and take corresponding actions to reduce both the risk and the consequences of a collision, it's going to take a lot of brute force to outperform an experienced and reasonably careful human driver.

      In short, reacting to an emergency 100ms faster than a human driver is good, but sufficient situational awareness and forward planning that you were never in the emergency situation in the first place is better.

      • ThrustVectoring 8 years ago

        It doesn't need to outperform an experienced and reasonably careful human driver to be a significant net benefit to society. All it needs to do is get the most dangerous 10-20% of the population out from behind the wheel.

        • Nomentatus 8 years ago

          It's occurred to me lately that we probably need to get autonomous vehicles on the road even a bit earlier than this, since delaying how quickly the safety learning curve is mounted has a long-term cost in lives, too.

      • ryandvm 8 years ago

        Bingo. This is the difference between "intelligence" and "artificial intelligence". AI as we know it today is pattern recognition. There is no ability to form even the most basic of concepts. They may have incredible sensors, but these driverless cars hardly have the intelligence of a dragonfly.

        • perfmode 8 years ago

          I challenge you to prove that you are capable of more than recognizing patterns.

          • ryandvm 8 years ago

            Okay, for the sake of argument let's suppose that I was clever enough to come up with the famous Grandfather Paradox of time travel. Certainly no one has observed such an event before - or anything like it. I would posit that it takes more than pattern recognition to build the necessary mental concepts to design, much less understand, such a thought experiment.

            In fairness, I will concede that pattern recognition is crucial to intelligence. I would clarify my earlier position by saying that pattern recognition alone can only get you so far. Intelligence is the process of taking data, recognizing patterns, constructing multi-layered concepts and models from those patterns, and then being able to simulate and extrapolate potential outcomes based on varying inputs.

            The inability of modern AI to build complex concepts from the patterns they observe is why I know to slow down my car on a Halloween night in a residential neighborhood and a self-driving Uber doesn't.

            • perfmode 8 years ago

              Your Halloween example is you recognizing patterns:

              - on Halloween there are more children on the road than usual
              - children are more likely to disobey traffic laws

              It’s quite possible that Waymo cars are already capable of this.

      • spacehome 8 years ago

        You might perform well in those scenarios, but a lot of people in many parts of the world do not.

  • stouset 8 years ago

    At some point you still need to switch roles. We’re at that point.

  • carlsborg 8 years ago

    Agree. Uber should have fitted their taxis with data devices.

aaroninsf 8 years ago

Surprising no one.

"win-at-any-cost" and "second place is first looser" (sic) do not cohere with safety.
