NTSB ‘unhappy’ with Tesla release of investigative information in fatal crash (washingtonpost.com)
The issue is that this investigation is important to Tesla but is not considered 'major' under the NTSB's definition. In major investigations, such as airline crashes, the NTSB is the lead agency; it gives multiple press conferences and releases preliminary information to the media.
It is considered very bad form for any designated party[0] participating in an investigation to talk to the media without NTSB approval. Non-essential organizations like union representatives have been booted from investigations for talking to the media.
[0] https://www.ntsb.gov/investigations/process/Pages/default.as... (scroll down to "The Party System")
Both Uber and Tesla try to discredit the victim before any official investigation ends, and in both cases it’s quite clear they will try to cover their asses first. And they have all the data. How can we be sure they don’t selectively hide something or don’t tamper with the records before giving them to investigators?
Uber has said essentially nothing about their crash; it's the cops who rushed out and got nearly everything wrong. Uber clearly had a serious technical failure, but there's been no PR spin from them.
I think it is possible that the Tempe police chief and mayor anticipated the incident could be a problem for them, especially given the secret agreement that later came out, and chose to try to spin the story on their own account. It is not hard to imagine that, in that frame of mind, they might be receptive to someone from Uber saying "clearly this was unavoidable", but I have no way of knowing whether any such conversation took place.
Then there is the question of whether the Uber-supplied video accurately represents the lighting conditions at the time... This may seem unduly conspiratorial, but I gave both Uber and the Tempe administration the benefit of the doubt until it became clear that the initial reports were inaccurate and less complete than they could have been.
It seems like the dash cam video released is very misleading; a couple of people who've driven through the same area at night have shown it is well lit. [0] From the Uber cam it looks like the lights were still on, because there's a bright glow about where the streetlights appear in the other videos. It's kind of shocking just how dark the dash cam footage of the incident actually is, unless the lights had for some reason been switched off shortly before dawn (poorly implemented scheduling, maybe?), which would explain the residual glow.
The difference between the Uber dash cam [1] and the one posted to YouTube [2] is stark. It's certainly darker where she comes from, but nowhere near impossible to see. Source: the Ars Technica article. [3]
[0] https://youtu.be/CRW0q8i3u6E?t=32
[1] https://cdn.arstechnica.net/wp-content/uploads/2018/03/Scree...
[2] https://cdn.arstechnica.net/wp-content/uploads/2018/03/kaufm...
[3] https://arstechnica.com/cars/2018/03/police-chief-said-uber-...
> Then there is the question of whether the Uber-supplied video accurately represents the lighting conditions at the time...
Why would that matter? The issue here is that the LIDAR system failed to detect the pedestrian.
It's of tremendous importance from the perspective of public perception of who's at fault.
If the average observer watching the published video arrives at a conclusion of "well I would have hit that person too, she appeared out of nowhere in front of the car", it obviously matters.
It's not important at all. There's a bug in Uber's software. That's what we should be talking about.
It does seem that there is a problem with the vehicle's lidar or the associated software, so does it not strike you as strange that the story being pushed claimed that the victim "came out of the shadows", which is a misleading irrelevance if lidar is the primary sensing technology? Especially as the unrealistically dark video does not even seem to fit that story.
Yes. That's why I am posting in this thread.
Sorry, I misunderstood your point.
No you're right, the police happily pointed out that the woman was homeless and seemed to blame the victim. Why would they do that?
It's reasonable to assume it's to keep Uber happy.
> No you're right, the police happily pointed out that the woman was homeless and seemed to blame the victim. Why would they do that?
Because A.R.S. (the Arizona Revised Statutes) says that pedestrians have a duty to yield outside of crosswalks, and they normally make the police accident report available shortly after any accident.
I know because I was over there not very long ago trying to get one for someone else. It's just off of Mill Ave., not very far south of where this accident happened.
> Because A.R.S. (the Arizona Revised Statutes) says that pedestrians have a duty to yield outside of crosswalks, and they normally make the police accident report available shortly after any accident.
Regardless, it is normal to wait until an investigation is complete before you start making statements. Making premature statements actually makes the results of any investigation look suspect. Pretty dumb move.
Oh, and even if the woman is at fault that does not mean it is open season on pedestrians that happen to end up on the road.
Unless they're doing something special I haven't read about, that was the investigation by police. They don't spend a long time on every accident in my experience. What happened in our case was the driver turning left was cited for failure to yield. Case closed, next.
And I think a lot of people here don't have much experience driving these roads at night. More lights don't really help; on some level there are too many lights. You can see stoplights and such a good mile away, and pedestrians are moving shadows at night.
I had to train myself to notice them more after some weird experiences, like the strange, uncoordinated bicyclist riding circles in the middle of a road for no reason in the middle of the night.
I'm sure there are things Uber and the safety driver could've done better, but I fully believe they really didn't notice them. That's right near an overpass and moving between lit and shadowed places also screws with your vision.
And FWIW, I've driven extensively here at night and I know that stretch of Mill Ave. rather well. I used to drive from Mill Ave to Van Buren, going through Papago Park.
The police made those statements while the body was still warm, I highly doubt they could have properly investigated this particular accident in that time. It mostly looked like they were actively looking to pin the blame on the pedestrian somehow.
Note that, as part of the responsibility of piloting a two-ton piece of steel, you are expected to do your best to keep other traffic participants alive even when they break the rules, especially when they are more vulnerable than you are.
They saw a video that showed that she failed to yield and found a body that wasn't anywhere near a crosswalk. There was nothing more for them to investigate at that point, because the other points weren't relevant. So they were very likely done with their investigation at that point.
It was a horrific accident and there's pretty much always something someone could have done better, but as far as traffic laws go, it was her responsibility to make sure it was safe to cross. I get that you disagree with that and I can see where you're coming from, but the law says the duty was on her side.
Uber could, and should, do better than this. I believe the NTSB can (and should) demand that of them and everyone else, in fact. But the cops don't even enter into that. They're pretty much just going to figure out which traffic laws were broken and who had right of way.
There's a fair point that maybe they should be able to, I don't know, inspect the LIDAR sensors or something, but I don't think anything like that will be practical for at least a decade or two. To my knowledge, that should be up to the NTSB for now.
Not being near a crosswalk actually means in most places that you are allowed to cross there. You will obviously have to yield for other traffic but pedestrians crossing the road are an expected thing, wherever you drive. The vehicle braking or not braking would be one input into deciding whether the driver was under the influence and could shift liability from the pedestrian to the driver to some extent.
In lots of places a sobriety/drug test is mandatory after an accident with severe injuries or fatalities.
And if the driver were to be found not to have braked at all - or even to accelerate - the driver could very well lose their license. Even when you have the right of way you still have to behave like a responsible driver would.
If inspecting the full digital record by the police is not feasible then I would argue self driving cars have no business being on the road at all. After all, we require normal drivers to be witnesses to accidents as well, and we expect them to cooperate in tests to determine whether or not they were able to control their vehicle, especially in fatal accidents.
In this case one of the participants is dead and the other is silicon, so the only evidence taken was the same as if all participants had died. But that isn't true: at least one of them had a lot of evidence to give, and given the novel nature of the incident there was a very good reason to actually evaluate that evidence.
Being 'automated' should not be an automatic get-out-of-jail card with respect to your liability and your proven ability to control a vehicle; at a minimum, the same standards that apply to regular drivers should apply to automation.
I suspect they did check the human for inebriation, but I saw no indication the driver was drunk, so I expect that didn't matter. I honestly don't think they would have cited a human driver in this circumstance unless they left the scene of the accident or were drunk.
I think the problem people have here is that this was treated exactly like a normal accident and maybe they shouldn't have. But we have the NTSB to examine the engineering of the car, that's not something the cops are equipped to do.
The police did their check and released their report once it was done, which is a very standard practice. It can't have taken them that long--the police station where you get accident reports is on the other side of the Mill Ave. bridge from the accident.
I'm not terribly sure it's Uber. It actually looks like something to do with keeping the Governor happy. https://www.theguardian.com/technology/2018/mar/28/uber-ariz...
It is important to distinguish that in the Uber case it was a fully self-driving car, with a safety supervisor, that killed a pedestrian. The responsibility for the fatality was overwhelmingly Uber's. A small share falls on the pedestrian, who should not have been jaywalking and should have been looking. In the Tesla case it was not a fully self-driving car, and the driver, who had the responsibility to stay attentive, died. One would expect Autopilot to do better and the road markings to be better, but the responsibility is still overwhelmingly with the driver.
Maybe we have a valid use case for a blockchain?
edit: can someone explain what’s so wrong with this comment please?
Blockchains do nothing to make the source of information trustworthy, they can only prove the information was created before a particular point in time. You could doctor a photo/video and put it on the blockchain just as easily as a real photo. You would only be able to tell it's fake if the original version was added first (if there's an original to begin with). There's a fundamental analog hole[1] when connecting a digital system to the real world.
If telemetry data hashes are regularly added to a blockchain, then we can be confident that the full data set matches the hash at a later date, and are not purely dependent on the honesty of a proprietary manufacturer; a rough sketch of this idea follows below.
If there is a concern that the original data collected might be doctored, then that's valid, but it should also be the focus of tests and legislation. For example:
- Speedometers are already legally required to be calibrated within a certain tolerance.
- The VW diesel scandal was discovered and hurt the manufacturer badly.
The argument that we shouldn't use a blockchain for storing reference data because that data might be dishonest doesn't strengthen the argument that we should purely trust the vendor; it weakens it. (That said, I think this is pre-optimizing, and using a blockchain for this today is overkill... but long term, I could very much see this being a valid use case.)
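As a rough illustration of the hash-commitment idea above (a minimal sketch, not anything Tesla or the NTSB actually does; the record format and field names are made up for the example): the vehicle periodically hashes its telemetry batches into a chain and publishes only the digests, and an investigator who later receives the full logs can recompute the chain and confirm the data matches what was committed at the time.

```python
import hashlib
import json

def chain_digest(prev_digest: str, batch: list) -> str:
    """Hash a telemetry batch together with the previous digest, so that
    tampering with any earlier batch breaks every later digest."""
    payload = json.dumps(batch, sort_keys=True).encode()
    return hashlib.sha256(prev_digest.encode() + payload).hexdigest()

# Hypothetical telemetry batches recorded during a drive.
batches = [
    [{"t": 0.0, "speed_mph": 64.8, "autopilot": True}],
    [{"t": 1.0, "speed_mph": 65.1, "autopilot": True}],
]

# On the vehicle: publish each digest externally (blockchain, timestamping
# authority, regulator, ...) as it is produced.
published, prev = [], "genesis"
for batch in batches:
    prev = chain_digest(prev, batch)
    published.append(prev)

# Later: an investigator with the full logs recomputes the chain and checks
# it against the digests that were published at the time.
prev = "genesis"
for batch, digest in zip(batches, published):
    prev = chain_digest(prev, batch)
    assert prev == digest, "logs do not match the committed digests"
```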
I see; people imagine that Tesla would use a defeat device to write doctored logs prior to the crash. However, this would not work, since the car can't write logs saying that everything was fine and that it did everything right; people would notice that's incorrect as soon as they compared it with the evidence collected at the crash site.
If the car were able to know what the right thing to do was, it would not only put it in the logs, it would have prevented the crash by doing it (unless, of course, Musk is a contract killer and the cars are crashing intentionally with correctly pre-doctored logs).
That’s why I suggested to use a blockchain - to ensure that the logs are not modified post-mortem.
Follow your thought through: what prevents them rewriting an alternative blockchain history?
Why can't they apply that to digital signatures with timestamps? This has been a solved problem for years. You could use the NTSB as a timestamping authority.
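For what that simpler alternative might look like (a minimal sketch, assuming the Python `cryptography` package; the idea of the NTSB as a timestamping authority is the commenter's suggestion, not an existing service): a neutral authority signs the log digest together with a timestamp, and anyone can later verify that this exact digest existed at that time.

```python
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical key pair held by a neutral timestamping authority.
authority_key = Ed25519PrivateKey.generate()
authority_pub = authority_key.public_key()

def timestamp_digest(log_digest: bytes):
    """The authority signs (digest || timestamp), attesting that this exact
    log digest existed no later than this time."""
    ts = str(time.time()).encode()
    return authority_key.sign(log_digest + ts), ts

# Placeholder for the SHA-256 digest of the vehicle's logs.
digest = b"\x12" * 32

signature, ts = timestamp_digest(digest)
# Verification raises InvalidSignature if the digest, timestamp, or
# signature was altered after the fact.
authority_pub.verify(signature, digest + ts)
```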
Walk me through your implementation and I will walk you through the blockchain implementation and see the strengths and weaknesses.
You're just an internet person triggered by a question; really unpleasant attitude, btw. Very closed even to considering an exotic solution, and angry as if I'm implementing the blockchain at this very moment. Are you in your 50s by any chance? Or maybe personal problems, or no sex for a prolonged period, or something like that? Why are you acting like such an unpleasant and rude person?
Blockchain provides zero benefit in this case. People on HN are tired of people ignorantly (sorry) saying blockchain. You should state, for starters, why simple digital signatures are not enough.
No, you just lack imagination.
Funny, the article says several times that the NTSB is unhappy with Tesla's release of information, but it never says why. It's not clear how it can interfere with the investigation. Maybe they just want to control the narrative? But that is no part of their function. Sorry if you're unhappy about Tesla's disclosures guys, but why, and why should we care?
Because, if Tesla signed up as a "party" to assisting with the investigation, that puts them under certain nondisclosure rules.[1] At the end of the investigation all the info comes out, but not in the early stages. Selectively releasing information that makes some party look good is not allowed if you are involved with the investigation.
This is widely understood in the aviation community. The mission of the NTSB is not to assist with either litigation or PR.
"Contacts with news media concerning the investigation will be made only by the NTSB, through the Board Member if on-scene, the NTSB’s representative of its Office of Public Affairs, or the IIC. The guiding policy is that the NTSB is a public agency engaged in the public’s business and supported by public funds. The agency’s work is open for public review, and the Act under which it operates makes this mandatory. The NTSB believes that periodic factual briefings to the news media are a normal part of its investigation and that, for the public to perceive the investigation as credible, the investigation should speak with one voice, that being the independent agency conducting the investigation. Therefore, the NTSB insists that it be the sole source of public information regarding the progress of an accident investigation. Parties are encouraged to refer media inquiries to the NTSB’s Office of Public Affairs. In any case, release to the media of investigative information at any time is grounds for removal as a party."
[1] https://www.ntsb.gov/legal/Documents/NTSB_Investigation_Part...
If they are a party to the investigation, ignoring the non-disclosure rules makes Tesla seem to me like a dodgy company willing to act against established norms designed to ensure transport safety, and I hope they get the book thrown at them for it.
If they are not a party to the investigation, I'd question why not. When was the last time an aircraft manufacturer declined to be a party to the investigation? They recognise that if they get a reputation for being unsafe that has repercussions for future sales; I'd hope the same was true of car manufacturers!
To me, there's literally no way this makes Tesla look good.
Isn't that a lose/lose for Tesla?
Are not a party to the investigation -- motives questioned, discovering facts takes longer or impossible
Are a party to the investigation and information can only be released by NTSB -- share price gets hammered every time there's a crash and everyone else gets a chance to put out information
I'm inclined to lean towards "special circumstances" here. Does every Ford crash make national news?
> share price gets hammered every time there's a crash.
Instituting a more responsible testing program would both signal that they are dealing with the matter and also reduce the possibility of further deaths.
> Does every Ford crash make national news?
Those that indicate a major screw-up do, such as the Explorer rollovers (or Chevy's ignition switch issue, for that matter).
I don't think the NTSB would be upset if the statements about these autonomous-vehicle accidents were purely factual, relevant and without self-serving commentary and innuendo.
Is the same not true for Boeing?
I'd argue Boeing (and most other companies) have the benefit of amortizing "fear" over the total installed base of similar technologies.
A Chevy engine turns out to have a design flaw; investors say, "Yes, but Ford produces and sells tons of engines, so here's how much we think fixing it will cost."
It seems like the reaction to the recent Uber / Tesla self-driving crashes is more of the form "Gee, maybe this isn't even possible."
If the V-22 Osprey tiltrotor were the only aircraft Boeing made... then I'd say it would be a more similar analogy.
"Move fast and break things", including non-disclosure agreements and established norms :/
Because Tesla should be a subject of the NTSB investigation, they should _not_ be a party to it, any more than a suspected criminal should be a party to their criminal investigation. Tesla should have no discretion over what data they provide to the NTSB. Calling them a "party" sounds like an excuse to control the information flow, not narrowly tailored to a legitimate state interest.
Tesla has an ethical and fiduciary duty to carry out their own independent investigation, to the extent that it doesn't interfere with the NTSB's. These organizations do not have the same interests. They don't have to be adversaries, but it's inappropriate for them to be partners. The public is better served by multiple independent investigations.
While I don't know how this is perceived within Tesla, the adversarial relationship you paint between the NTSB and Tesla is exactly what should not happen. Root causes of crashes must be investigated and published so that Tesla and their competitors can improve the safety of their cars.
> the adversarial relationship you paint
There is space between being an adversary and being a partner in the investigation.
> Root causes of crashes must be investigated and published so that Tesla and their competitors can improve safety of their cars.
Yes, and two independent such investigations are better than one. With less opportunity for the subject to steer the result.
If the NTSB was truly focused on providing investigations that minimize future risk (this is separate from blame, responsibility or punishment), then it's pretty clear that it should ideally be able to work in as neutral of an environment as possible.
Tesla is also clearly trying to control the narrative, and I think the NTSB's usual MO is that there simply be as little narrative as possible until something as close to ground truth as possible has been determined and released. The NTSB understands that releasing reports into a pre-charged environment leads to an increased risk of backlash against the NTSB, thus reducing its capacity to minimize future risk.
Yeah, I have huge respect for the NTSB. I think they're a big reason air travel is so safe. You'd never see an airline energetically trying to assign blame like this, and I'm disappointed that Tesla's PR operation is spinning vigorously before the investigation is over.
Right. Aviation and the NTSB understand each other. Commercial aviation accidents seldom have a single cause, because decades of hard work have eliminated all the single causes of failure. It usually takes a whole chain of failures to bring down an airliner, and it takes long, complicated investigations to figure out exactly what happened.
Crash investigations tend to have conclusions like "A happened, and that would have been survivable except that B also happened, and the pilots were distracted dealing with A while B was the more serious problem." There's a whole field of "cockpit resource management" which deals with such issues.
The NTSB's job is not to assign blame. It's to understand exactly what happened and figure out how to keep it from happening again.
This crash is somewhat similar to the four other Tesla crashes where a stationary obstacle was partially obstructing the left edge of a lane. We know that Teslas will plow into such obstructions. Here's the area of 101 leading up to the crash.[1] Note the width of the space between the lines marking the gore area, the pointy section as the exit lane tapers off. It becomes a full lane wide, and widens very slowly. It's possible that the lane following system locked into the gore area as a lane, and followed it right into the barrier.
Caltrans standards call for a sign in the gore area.[2] But drivers keep hitting them. Replacing them is dangerous work, because there's live traffic and not enough room for a blocking vehicle. Especially here, because this is a left exit designed for high speed. So one of the options is to put the sign overhead, well ahead of the split. That's what Caltrans did here. Tesla's system, of course, does not understand such a sign.
Federal standards recommend striping in the gore area.[3] But Caltrans does not usually do that. Probably because standard truck-mounted lane-striping sprayers can't do it without shutting down the freeway.
I look forward to seeing NTSB's take on all this.
[1] https://www.google.com/maps/@37.4107387,-122.0752862,3a,75y,...
[2] http://www.dot.ca.gov/trafficops/tcd/exit-gore.html
[3] http://www.fdot.gov/roadway/ds/06/idx/17345.pdf
Thanks, that's very helpful. When I first looked at your link #1, I thought the gore area was a lane, so I can see how a machine would get that wrong.
But if it is prioritizing "I'm in a lane, so I'm cool" over "I'm hurtling toward a wall with a very visible marker" then clearly more work needs to be done. (Worse still if it can't recognize such a visible obstruction in the lane.) And, I'd say, should have been done before they were turning people loose with it.
That's Tesla. Here's the famous fatal crash in China.[1] No braking at all, right into a work vehicle operating in the left edge of a freeway. Tesla's system just doesn't detect big solid stationary obstacles. There are three other crashes with dashcams in similar situations.
My guess is that they are unhappy because Tesla has decided they hold no responsibility. They blame 1) the driver and 2) Caltrans (for not maintaining the median).
The NTSB doesn't like to speculate. They like to have solid facts, and they want the manufacturer to take blame if need be, so they can fix things.
Also they're interested in minimising risk in the future and providing recommendations to achieve that. It may well be the case that in a strict legal sense Tesla has no responsibility here, but from the NTSB's point of view that's irrelevant—the question is whether there's anything Tesla could do to minimise the risk of future incidents like this.
If Tesla relies so heavily on lane markers, someone may very well perform "a hit" on one or more Tesla owners by painting some adversarial lane lines. They could even be tuned to the spectral response of the Tesla sensor pack.
I imagine maybe the NTSB feels that their neutral position is being undermined? The NTSB gave Tesla the data recorder in order for them to help the investigation, but then Tesla took some of the data on it and used it to make a press release to put some spin on the event; probably the NTSB feels they were played...
> “The driver had received several visual and one audible hands-on warning earlier in the drive"
This is completely irrelevant to the crash. Anyone who uses Autopilot knows that throughout the drive, even if you have both hands on the wheel but don't apply enough pressure, you will get a warning and have to jiggle the wheel for it to recognize you're there.
Related, I wondered about this bit: "Tesla said Huang had not followed guidelines intended to ensure drivers are paying attention while the vehicle is in Autopilot mode."
Even if the sensors were correct, I have deep questions about the human ability to follow instructions requiring them to be robot-like. I would love to see some studies measuring the extent to which people can really follow Tesla's guidelines to the letter for the 300-500 hours/year that somebody with this commute would be doing.
I'm sure I'm an outlier, but I would personally never use a system like Tesla's Autopilot. I already think highway driving is slightly too boring to hold my attention, so on long drives I always supplement with podcasts and audiobooks. Until I can lie down and take a nap, I'm sticking with manual driving.
I have just seen one study (not exhaustive) that goes into some details about Tesla autopilot [0]
>Participants emphasized being alert at all times, paying attention to the road environment and keeping hands on the wheel while in autonomous driving mode.
...
> Drivers seem to enjoy these technologies, and are aware of the limitations of Autopilot and Summon. In the comments, we observed that drivers were highly motivated to use these technologies safely and have not seen indications of the concerns raised in the past such as engaging with secondary tasks while using Autopilot.
It wasn't obvious to me from those quotes: this study is an analysis of an online survey the authors posted in Tesla forums and I suppose on Twitter. It's not based on anything like neutral observation of real behavior.
"We conducted an online survey with 162 Tesla Owners. The survey was distributed through online forums and social media during April-May 2016. The survey asked questions about drivers’ attitudes towards and experiences with two functionalities built into Tesla Model S cars: Autopilot and Summon. Questions covered frequency of use, satisfaction, ease of learning and knowledge related to Autopilot and Summon. Additionally, we asked participants to report unusual or unexpected behaviors they experienced while using these systems and what they consider a key aspect of safety. The average time to complete the survey was 9.6 minutes."
From the abstract alone (can't access the full paper), that study seems indistinguishable from a Tesla PR puff piece.
The guidelines are pretty simple. When you engage Autopilot it reminds you: keep your hands on the wheel; be prepared to take over at any time.
They do that for legal reasons. Are you saying Autopilot is no better than the adaptive cruise control and lane assist found on many cars? That's not the impression they want you to have at https://www.tesla.com/autopilot ...
Sure. And I'm skeptical that's really enough to keep most humans alert enough to actually be effective at taking over at any time.
Most users will say it's fine, of course. And, as here, Tesla will certainly use it to quickly blame the driver in an accident. But I'd like to see objective measures of attention compared over the long term.
You can't ask a human to do that. It's like selling a gun with a trigger that pulls if you wave at it.
Didn't work for trains, now did it?
Exactly. Train drivers have whole protocols to counter attention issues. And they are trained professionals.
A tape repeating "put your hands back on the instruments" can only buy you so much safety.
> This is completely irrelevant and false.
I, for one, would be generally dubious about making such a strong statement against something the NTSB has stated: it may well be the case that the system as designed should work as you describe, but why do they believe otherwise? Was there some flaw in the system?
Call me cynical, but it sounds like designing the wheel sensors to tend toward not sensing hands-on increases the chance the company can use the narrative being used: "driver received hands-on warnings and (apparently) ignored them", etc.
False negative seems likely. False positive seems unlikely, maybe nearly impossible.
Tesla is the company you pay $100,000 to kill you and then make you look bad in public.
They're not doing a very good job of delivering on that then. They don't seem to have anywhere near a high enough fatality rate.
Autopilot is only $10,000, but you are still supposed to be attentive.
It's early, but it seems likely there was an issue with Autopilot that contributed to this crash. To make matters worse, the owner apparently complained about errant behavior on that section of road. Why did he continue to use it without paying attention, and why didn't Tesla act much earlier?
It seems the local government or highway agency also neglected their duty to maintain the highway safety barrier, a shockingly regular occurrence where I live as well. I’ve wondered how often someone is injured because they failed to repair a barrier for several months.
It appears all the pieces fell into place at the right time and this man unfortunately lost his life.
> It seems the local government or highway agency also neglected their duty to maintain the highway safety barrier, a shockingly regular occurrence where I live as well. I’ve wondered how often someone is injured because they failed to repair a barrier for several months.
While the crash attenuators should exist and the various responsible authorities should maintain them appropriately, I find it frustrating that this is brought up in this conversation as if it's a significant factor. It might have saved this man's life, but this crash was sure to be incredibly violent with or without the barrier.
The existence of a crash attenuator could not and should not affect anyone's decision making that led to the car impacting the barrier. Not the driver, not Tesla, not autopilot.
I hope the NTSB comments on this and it leads to Caltrans doing a better job of replacing these quickly (if they haven't already committed to this in the aftermath of this incident), but I also hope that it has zero bearing on the rest of the report.
I understand what you’re saying completely. The point of highway safety equipment is usually as a last resort to minimize injury as much as possible, this is after markings, signage, drivers and vehicles have failed to prevent a crash.
Obviously the crash should’ve been avoided, but poorly maintained or designed infrastructure should not be left out of the conversation.
I think it's best to split these into two 'investigations' to talk about. Yep, the barrier probably contributed to the death, but is that really what we're interested in talking about? We're here to talk about the self-driving bit! We wouldn't be talking about it if it was a 2002 Toyota Camry that hit it.
There’s no need for there to be two investigations, the NTSB generally produces very thorough reports that will acknowledge all causes leading to injury or death.
I'm not saying there should be two investigations, I'm saying there should be two 'investigations' for the purposes of us talking about it.
Nobody here is actually interested in the case because of the implications for roadside maintenance - they're interested because it's a (semi) autonomous vehicle.
The gore point gets hit multiple times a year, and you would think that Caltrans could at least put down safety striping. Caltrans has the statistics to show that it needs better visual cues.
Why don't they do this, like in other well-planned places?
Does autopilot lead to overconfident drivers?
Background: My wife drives a Tesla Model S with AP.
Inattentive drivers more than overconfident drivers. You look down and stare at your phone for 10 seconds in a normal car and you are punished pretty quickly and learn not to do it.
You look down and stare at your phone for 10 seconds in a Tesla with AP and "nothing bad happens" ... almost all of the time.
And that's the problem with this version of AP. Yes, Tesla says keep your eyes on the road. Yes, Tesla says keep your hands on the wheel. But it's pretty easy to get lax and start to slide.
For the record, I think her use is the one valid use. She has RSI issues with her hands and arms and she does a good bit of expressway driving. She absolutely keeps her eyes on the road and hands near the wheel when using it. But I bet she's in the minority of regular Tesla AP users.
The problem with Tesla’s ‘autopilot’ is that it’s anything but.
Asking drivers to keep their eyes on the road and hands on the wheel while not steering guarantees that their attention will wander, because their brain isn’t getting enough stimulus to keep focused on the task.
I don’t know why it’s not clear to most people by now. The current Tesla ‘autopilot’ is simply more dangerous than manual driving because it harms human reaction time during emergencies.
Tesla is using legalese to blame people for this fully predictable effect when crashes do happen, but I suspect it's only a matter of time before they're forced to rebrand Autopilot as a lane-assist technology, which is all it is. Its only use as a safety system is to maintain control of the car if the driver becomes incapacitated, and safely bring it to a complete stop.
> Asking drivers to keep their eyes on the road and hands on the wheel while not steering guarantees that their attention will wander, because their brain isn’t getting enough stimulus to keep focused on the task.
From my experience, it's no different from your run-of-the-mill cruise control systems requiring, but not enforcing, that you keep your foot hovering above the brake pedal. I don't feel it's more dangerous than standard cruise control systems.
> Asking drivers to keep their eyes on the road and hands on the wheel while not steering guarantees that their attention will wander, because their brain isn’t getting enough stimulus to keep focused on the task.
Completely agree! When I'm driving her Tesla (no RSI issues here) I much prefer to use the adaptive cruise (which is really nice) and do my own steering for exactly the reason you cite. I have to pay attention anyway and steering keeps me from getting bored. Plus I do things like, you know, pass and keep myself from getting into bad situations that are many seconds ahead.
I only use AP to demonstrate it as a party trick to folks who have not seen it before. I half agree with the "use crowd sourcing to get AP training sets" but I completely disagree with how the thing is currently marketed -- even if (as you say) the legalese tries to cover Tesla.
Don't get me wrong: Her Model S is a spectacular car. I love it. But AP is somewhere between a party trick and a death trap unless, like my wife, you have serious problems holding the wheel for hours and are therefore happy to sit there and "supervise" what is a bad expressway autopilot.
I am curious how aircraft pilots get around this. An aircraft autopilot works much the same as a Tesla's: fly a straight line where I aimed it, with far less interaction than in a car. Yet pilots are able to take over from the autopilot, and their responsibility during its use is mostly looking out the window and occasionally switching frequencies.
There are two critical differences.
1: The handover latency (time from AP requesting handover to time pilot takes over) is measured in seconds to tens of seconds. AP is designed to give up a long time before any possible issues occur. Contrast this with cars on roads where the reaction times need to be in the sub-second range to avert crashes. If AP took a plane into terrain during poor visibility conditions and the pilots only got a second or two of terrain warning prior to a crash, such a crash would never be classified as pilot error on those grounds. Contrast this with self-driving cars where the autonomy frequently doesn't give up at all and the driver's awareness of the situation is the only thing to save them.
2: There are two operators on controls at all times. Recognising the limitations of human attention spans is one of but not the only reason for this being a requirement in civilian airlines.
Boeing has a whole design philosophy about making the operations of the AP completely transparent to the pilots and failsafe. That means that all key controls (thrust, trim, stick, etc.) in the cockpit are physically manipulated by the AP so the pilots can see exactly what's going on, and, more importantly, that the controls represent the exact state of the AP when the pilot takes over, so there are no unexpected sudden changes in input. The current generation of self-driving cars is a joke compared to the safety engineering that goes into AP systems.
One key difference is how much time operators have to figure out what's going on before action is necessary. If an aircraft autopilot fails, time before crash is likely measured in minutes, so the pilot taking 10 seconds to snap back into pilot mode still gives a pretty good likelihood of a favorable outcome. In a car, 10 seconds of no or bad driving on the part of the autopilot is pretty likely to cause a collision.
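To put rough numbers on that difference (a back-of-the-envelope sketch; the speeds and handover times are illustrative, not figures from any investigation):

```python
# Distance covered while a human operator is still re-engaging.
MPH_TO_MS = 0.44704  # miles per hour -> meters per second

for speed_mph, handover_s in [(65, 2), (65, 10), (550, 10)]:
    meters = speed_mph * MPH_TO_MS * handover_s
    print(f"{speed_mph} mph with a {handover_s} s handover -> ~{meters:.0f} m traveled")

# A car at 65 mph covers roughly 290 m in 10 seconds, usually more than
# enough to reach whatever the autopilot gave up on. An airliner at cruise
# covers far more ground, but typically has minutes of altitude and
# separation in hand rather than seconds.
```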
Even so, it's been implicated in crashes, notably Air France 447.
I think the big differences are that:
* pilots have thousands of hours of training/practice
* there are 2 of them
* there is a lot less traffic in the sky / not making a turn every km
> Asking drivers to keep their eyes on the road and hands on the wheel while not steering guarantees that their attention will wander, because their brain isn’t getting enough stimulus to keep focused on the task.
This is in HCI 101; we learned this by air traffic controllers crashing planes.
Supposedly any driver aid can make drivers more careless...
From Feb'18
https://www.carcomplaints.com/news/2018/insurance-company-su...
The insurance company says that despite its suggestive name and marketing campaign, "Tesla produced a semi-autonomous vehicle that misleadingly appeared to be fully autonomous."
In addition, the lawsuit claims Tesla advertised the package as providing a way to “automatically steer down the highway, change lanes, and adjust speed in response to traffic," all without requiring the driver to touch the steering wheel.
Given the fact that the autopilot warnings were ignored, could this possibly be a case of sudden driver incapacitation? Eg falling asleep at the wheel, a heart attack, etc?
The autopilot warnings happened earlier in the drive and aren't even relevant to the crash, they're Tesla's smokescreen.
(EDIT: This is why the NTSB is mad, Tesla is selectively releasing information like this, so they look good before the NTSB reaches any conclusion)
I learned everything I needed to know about Tesla's culture and Elon Musk specifically after their response a few years ago to the NYT's review criticizing the Model S's cold weather performance. All their subsequent responses to criticism haven't changed my impression at all. 'Lying with statistics' is a go-to Tesla PR move.
Even more deviously, it's "lying with true statistics".
In ancient times, it was "the word of god" from prophets by which you controlled the masses.
Now it is statistics.
Given the media circus that started, what choice did Tesla have? If this was really a problem, why didn't they make Tesla sign an NDA?
I respect the NTSB, but I do not like institutional-secrecy attempts to muzzle Tesla; facts like this can help keep people from placing excessive reliance on the self-driving abilities of Tesla's Autopilot, as long as personal privacy is respected. After all, calling it an autopilot is intrinsically wrong, as Tesla repeatedly asserts.
Is it institutional secrecy or is it disapproval of unilateral disclosure of only a sub-set of elements of an ongoing inquest?
What is the distinction between the two that changes the justification? I see those things as part of each other, as in institutional secrecy is bred from ongoing inquests in perpetuity.
The NTSB has a 50+ year history of producing detailed reports on accidents:
https://www.ntsb.gov/investigations/AccidentReports/Pages/Ac...
The notion that they'd suddenly turn secretive now seems absurd to me. Given their expertise and excellent track record, I think it's reasonable to trust them when they say they want to complete the investigation and publish a proper report, just like they do with other accidents.
Just to clarify, "secrecy" is not a word to which I attach negative sentiment.
> What is the distinction between the two that changes the justification?
That one is a form of bias whereas the other is the opposite of bias: you don't want elements of an ongoing inquest to be released independently because they don't provide a complete picture of the investigative results, and thus provide a biased outlook on the event.
> I see those things as part of each other, as in institutional secrecy is bred from ongoing inquests in perpetuity.
NTSB inquests have never been "ongoing in perpetuity". They provide thorough and extensive public reports.
Their entire purpose is to minimise future risk and they're very good at that. The point of discretion until all the facts are in (what you insultingly call "institutional secrecy") is that until as much as possible is known, there may be a major piece missing from the data which changes everything.
What has Tesla concealed - as you intimate? What does the NTSB know they are not telling? One hopes their report at the end will tell all.
I think the idea is we don't know what they're not telling us because they haven't told us what they're not telling us.
And even they might not know that they're "not telling us", because the investigation is ongoing and who knows what'll be uncovered?