Tesla reports another Robotaxi crash

electrek.co

155 points by hjouneau 2 months ago · 164 comments

Veserv 2 months ago

The most damning thing is that the most advanced version, with the most modern hardware, with perfectly maintained vehicles, running in a pre-trained geofence pre-selected to work well [1], with trained, professional safety drivers, and with scrutinized data and reporting, averages an upper bound of 40,000 miles per collision (assuming the mileage numbers were not puffery [3]).

Yet somehow they claim that old versions, using old hardware, on arbitrary roads, with untrained customers as safety drivers, average 2.9 million miles per collision in non-highway environments [2], a ~72.5x difference in collision frequency, and 5.1 million miles per collision in all environments, a ~127.5x(!) difference in collision frequency, when their reporting and data are not scrutinized.

I guess their most advanced software and hardware and professional safety drivers just make collisions over a hundred times more frequent.

[1] https://techcrunch.com/2025/05/20/musk-says-teslas-self-driv...

[2] https://www.tesla.com/fsd/safety

[3] https://www.forbes.com/sites/alanohnsman/2025/08/20/elon-mus...

[3.a] Tesla's own attorneys have argued that statements by Tesla executives are such nonsense that no reasonable person would believe them.
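The non-highway ratio can be reproduced from the figures cited above (a back-of-the-envelope sketch; both inputs are the claimed numbers from the links, not independently verified):

```python
# Back-of-the-envelope check of the non-highway collision-frequency ratio,
# using only the figures quoted in the comment above.
robotaxi_miles_per_collision = 40_000       # upper-bound estimate for the robotaxi fleet
fsd_city_miles_per_collision = 2_900_000    # Tesla's claimed non-highway FSD figure [2]

ratio = fsd_city_miles_per_collision / robotaxi_miles_per_collision
print(f"~{ratio:.1f}x difference in claimed collision frequency")  # ~72.5x
```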

  • tim333 2 months ago

    That's not really like for like. You are comparing Level 4, where the car is supposed to do everything, with driver assist, where the driver is supposed to take over if things go off track.

    I'm not sure what the guys in the taxis with their hands on the arm rest do. I guess they have a button that either stops the car or connects it to a remote control operator?

  • randoglando 2 months ago

    It's because of selection bias. In the older vehicles, customers won't turn on autopilot if they think it won't handle the situation, so they only turn it on for highways and easier routes.

    • AnthonyMouse 2 months ago

      There is also another possibility.

      People often don't report minor accidents. If someone scrapes a pole without causing enough damage to exceed their insurance deductible, are they going to file a police report? Mostly not. The older number includes those incidents; the newer one doesn't.

      But the number for human drivers works like the old number: they're dividing miles driven by reported accidents. On top of that, they're using the average, weighted by miles, which isn't the same as the median; it over-represents the drivers who drive the most miles, and those are disproportionately professional drivers.
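A toy example of the miles-weighted-average point, with invented numbers (nine typical drivers and one high-mileage professional with a lower per-mile accident rate): the fleet-wide rate, total accidents over total miles, comes out safer-looking than the median driver's rate.

```python
# Toy population: (annual miles, accidents per million miles).
# All numbers are invented purely to show the mechanism.
drivers = [(10_000, 4.0)] * 9 + [(110_000, 2.0)]  # pro drives 11x the miles, safer per mile

total_miles = sum(m for m, _ in drivers)
total_accidents = sum(m * r / 1e6 for m, r in drivers)
fleet_rate = total_accidents / total_miles * 1e6   # "average by miles"

median_rate = sorted(r for _, r in drivers)[len(drivers) // 2]  # typical driver

print(fleet_rate, median_rate)  # ~2.9 vs 4.0: the pro pulls the average down
```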

      • fragmede 2 months ago

        Let's be real. The real bar is drunk humans. I can scream and yell and refuse to be friends with anybody who would put other people in danger like that, and they'll still drive drunk. FSD is good enough that it'll detect that the driver has fallen asleep, pull over, and park the car. Tesla can't talk about that for obvious legal reasons, but it has already saved many lives. Unfortunately we can't know those stats for comparison, and holy shit, people need to not drive drunk, but DUI laws don't cure addiction.

kstenerud 2 months ago

There's a lot of editorializing going on. Now that the title has been restored, hopefully things calm down a bit.

Ultimately, Tesla has two problems going on here:

1. Their crash rate is 2x that of Waymo.

2. They redact a lot of key information, which complicates safety assessments of their fleet.

The redactions actually hurt Tesla, because the nature of each road incident really matters: EVERY traffic incident must be reported, regardless of fault (even if it's a speeding car from the other direction that hits another vehicle which then hits the robotaxi - yes, that's actually in one of the Waymo NHTSA incident reports). When Tesla redacts the way they've been doing, it makes it very difficult to do studies like https://www.tandfonline.com/doi/full/10.1080/15389588.2025.2... which show how much safer Waymo vehicles are compared to humans WHEN IT COMES TO ACTUAL DAMAGE DONE.

We can't get that quality of info from Tesla due to their redaction practices. All we can reliably glean is that Tesla vehicles are involved in 2x the incidents per mile compared to Waymo. https://ilovetesla.com/teslas-robotaxi-dilemma-navigating-cr...

  • mmooss 2 months ago

    The redactions also suggest they are hiding something and that they rank safety below whatever they are hiding. That is, it makes Tesla untrustworthy with something that risks life and limb.

  • orwin 2 months ago

    If you only count robotaxis and not all Teslas, isn't the crash rate 20x per driven mile? I remember doing the math a few months ago and finding 20, but I might be mistaken.

TheAmazingRace 2 months ago

When you have a CEO like Elon who swears up and down that you only need cameras for autonomous driving, and who skimps on crucial extras like lidar, can anyone be surprised by this result? Tesla also likes to take the motto of "move fast and break things" to a fault.

  • jauntywundrkind 2 months ago

    I just find it distracting to pretend we know exactly what albatross to hang around the neck of the problem here. While I do tend to think lidar is probably useful, this isn't an open-and-shut case where lidar is absolutely essential and makes all the difference. Asserting that assumes more certainty than can be granted, and it harms the stronger point: Tesla doesn't seem to have serious proof that their systems are getting better, or that they are more trustworthy.

    The data just isn't there for us outsiders to make any kind of case, and that data is the crucial baseline we're missing.

  • tim333 2 months ago

    I'm not surprised, mostly because there were reports on Reddit that Tesla FSD was having disengagements every 200 miles or so in urban environments. Camera-only could probably work in the future, but seemingly not yet.

    • fragmede 2 months ago

      It's easy to argue that LIDAR is expensive and unnecessary, but radar has been standard for luxury cruise control for decades, so it has a variety of OEM suppliers. That makes Tesla's lack of radar, down to the CEO's ego, damnable. The problem with camera-only is fog. My human eyes don't penetrate fog. Radar does. Proving that camera-only is possible is stupid and ego-driven; it doesn't come from a place of merit, science, or technology.

  • rich_sasha 2 months ago

    Musk's success story is taking very bold bets almost flippantly. These bets carry a premium, because to most people they are so toxic that they would never consider them.

    Every time when he has the choice to do something conservative or bold, he goes for the latter, and so long as he has a bit of luck, that is very much a winning strategy. To most people, I guess the stress of always betting everything on red would be unbearable. I mean, the guy got a $300m cash payout in 1999! Hands up who would keep working 100 hour weeks for 26 years after that.

    I'm not saying it is either bad or good. He clearly did well out of it for himself financially. But I guess the whole cameras/lidar thing is similar. Because it's big, bold, from the outset unlikely to work, and it's a massive "fake it till you make it" thing.

    But if he can crack it, again I guess he hits the jackpot. Never mind cars; they are expensive enough that lidar cost is a rounding error. But if he can then stick 3D vision into any old cheap cameras, surely that is worth a lot. In fact, wasn't this part of Tesla's great vision: to diversify away from cars and into robots etc.? I'm sure the military would order cheapo cameras by the thousands if they worked 90% as well as a fancy lidar while being fully solid state.

    That he is using his clients as lab rats for it is yet another reason why I'm not buying one. But to me this is totally in character for Musk.

    • BrenBarn 2 months ago

      The fact that he's able to fake it until he makes it is a failure of our society. He should be impoverished and incarcerated.

      • randoglando 2 months ago

        He's a complicated figure. He has done so much good as well. EVs in the US and reusable rockets owe a lot to him. OTOH, so does the cesspool that is X.

        • cn-watch 2 months ago

          There are no Elon Musk museums. There are no Elon Musk hospitals. There are no Elon Musk centers for unwed mothers.

          How could the richest man in the world give so little back?

          • cosmicgadget 2 months ago

            It probably doesn't help but Elon himself is a center for unwed mothers.

          • AnthonyMouse 2 months ago

            Traditionally those are the things that happen once someone retires and starts contemplating where their money should go after they die.

            • Zigurd 2 months ago

              Bill Gates is still kickin'. There are credible independent estimates that his funding has saved tens of millions of lives that would've been lost to malaria, AIDS, and other diseases.

              Effective altruism and other New Age garbage pseudo philosophy can't hold a candle to that.

              • AnthonyMouse 2 months ago

                > Bill Gates is still kickin'.

                And he's retired, so the money is no longer useful to him for maintaining control over the company he runs or expanding it, which is when it traditionally starts going to charities.

              • oska 2 months ago

                In my opinion, one of the things that most reveals a person's biases and worldview is which tech oligarchs they revere and which they loathe.

                To reveal my own bias / worldview, I loathe and detest Bill Gates in nearly every way and have done so for over three decades. I think he has had a massively negative impact on humanity, mainly by making the computer industry so shitty for 4+ decades but in other more controversial areas as well.

                With Elon Musk, while perceiving a number of big faults in the man, I also acknowledge that he has helped advance some very beneficial technologies (like electric vehicles and battery storage). So I have a mixed opinion of him, while Gates, to me, is almost all evil and has had a massive negative impact on the planet.

                • BrenBarn 2 months ago

                  What if you think they're all evil?

                  • oska 2 months ago

                    Yeah, that's cool. I loathe almost all of them too (e.g. Zuckerberg, Altman, Hoffman, Ellison, etc)

                    I guess what I'm saying is that when people only fixate on one oligarch, which one they mostly focus on can be quite telling.

          • oska 2 months ago

            Let's just ignore Tesla's enormous positive health impact of replacing millions of polluting vehicles with zero-emission ones.

            • rich_sasha 2 months ago

              I'm conflicted on this one. Famously, Tesla's main revenue source for ages was selling green credits to other car makers. Presumably, if not for Tesla, these car makers would have had to do something else.

              The way I see it, he converted his cars' greenness into other people's fumes. So not a net win after all.

          • johnthewise 2 months ago

            How do you know? Maybe he just doesn't name them after himself?

    • gitaarik 2 months ago

      It rather reminds me of how Musk was obsessed with converting PayPal to run on Windows servers instead of Linux, and how he was ultimately ousted by the other executives over it. Because he already had a big share in the company, he still made a lot of money. But he doesn't seem to be a clever engineer.

  • DoesntMatter22 2 months ago

    Turns out Waymo hits a lot of things too. Why isn't Lidar stopping that?

    • TheAmazingRace 2 months ago

      Last I checked, Robotaxi has a safety driver, whereas Waymo is completely self driving, yet has a very good safety record. That speaks volumes to me.

      https://waymo.com/safety/impact/

      • themafia 2 months ago

        Completely self driving? Don't they go into a panic mode, stop the vehicle, then call back to a central location where a human driver can take remote control of the vehicle?

        They've been seen doing this at crime scenes and in the middle of police traffic stops. That speaks volumes too.

        • daheza 2 months ago

          Incorrect: humans never take over the controls. An operator is presented with a set of options and chooses one, which the car then performs. The human is never in direct control of the vehicle. If this process fails, they send a physical human to drive the car.

          • themafia 2 months ago

            > humans never take over the controls

            > presented with a set of options and they choose one

            > they send a physical human to drive the car.

            Those all sound like "controls" to me.

            "Fleet response can influence the Waymo Driver's path, whether indirectly through indicating lane closures, explicitly requesting the AV use a particular lane, or, in the most complex scenarios, explicitly proposing a path for the vehicle to consider. "

            https://waymo.com/blog/2024/05/fleet-response/

            So they built new controls that typical vehicles don't have. Then they use them. I fail to see how any of this is "incorrect." It is, in fact, _built in_ to the system from the ground up.

            Semantic games aside, it is obviously more incorrect to call them "completely self driving" especially when they "ask for help." Do human drivers do this while driving?

            • Mawr 2 months ago

              I don't know what you're trying to prove here. Stopping safely and waiting for human input in edge cases is fine (Waymo). Crashing into things is not fine (Tesla).

    • mmooss 2 months ago

      What is Waymo's accident rate? (Edit: Tesla's is in the article, at least for that region.)

      • tasty_freeze 2 months ago

        And there is a linked article about Waymo's data reporting, which is much more granular and detailed, whereas Tesla's is lumpy and redacted. Anyway, Waymo's data, with more than 100M miles of driverless operation, shows a 91% reduction in accidents vs. humans. Tesla's is 10x the human accident rate according to the Austin data.

  • guywithahat 2 months ago

    The more I've looked into the topic, the less I think the removal of lidar was a cost issue. There are a lot of benefits to simplifying your sensor stack, and while I won't pretend to know the best solution, removing things like lidar and ultrasonic sensors seems to have been a decision about improving performance. By doubling down on cameras, your technical team can stay focused on one sensor technology, and you don't have to deal with data priority and trust the way you do with a variety of sensors.

    The only real test will be who creates the best product, and while Waymo seems to have the lead, it's arguably too soon to tell.

    • _aavaa_ 2 months ago

      Having multiple sources of data is a benefit, not a problem. Entire signal processing and engineering domains exist to take advantage of this. Even the humble Kalman filter lets you combine multiple noisy sources to get a more accurate result than is possible from any one source alone.
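A minimal one-dimensional sketch of that idea (illustrative only; real automotive sensor fusion is far more involved): fusing two noisy measurements by inverse-variance weighting, the static one-step case of a Kalman measurement update, always yields a lower-variance estimate than either sensor alone.

```python
# Fuse two noisy measurements of the same quantity by inverse-variance
# weighting (the static, one-step case of a Kalman measurement update).
def fuse(z1, var1, z2, var2):
    w1 = var2 / (var1 + var2)  # weight each sensor by the *other* one's noise
    w2 = var1 / (var1 + var2)
    fused = w1 * z1 + w2 * z2
    fused_var = (var1 * var2) / (var1 + var2)  # always <= min(var1, var2)
    return fused, fused_var

# Hypothetical example: a noisy camera range estimate and a tighter radar one.
est, var = fuse(10.4, 1.0, 10.1, 0.25)
print(est, var)  # ~10.16, 0.2 -- closer to the radar, less noisy than either
```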

      • guywithahat 2 months ago

        What I've heard from Elon and engineers on the team is that some combinations of sensors create ambiguity, especially around faults. If a camera and a radar sensor provide conflicting information, it's much harder to tell which is correct than it is with redundant camera sensors.

        I will also add, from my personal experience, that while some sensors work best together (like IMU/GNSS), we usually used either lidar or cameras, not both. Part of the reason was that combining them required a lot more overhead and cross-sensor experts, and it took away from the actual problems we were trying to solve. While I suppose one could argue this is a cost issue (just hire more engineers!), I do think there's value in simplifying your tech stack whenever possible. The fewer independent parts you have, the faster you can move and the more people can become experts on one thing.

        Again, Waymo's lead suggests this logic might be wrong, but I think there is a solid engineering defense for moving toward just computer vision. Cameras are by far the best sensor, and there are tangible benefits beyond just cost.

        • karlgkk 2 months ago

          Counterpoint: Waymo

          > solid engineering defense for moving towards just computer vision

          COUNTERPOINT: WAYMO

          • fragmede 2 months ago

            We don't know enough about the internals of either one to make a judgement. The only one open enough to judge is Comma.ai.

            • karlgkk 2 months ago

              Waymo just reached 20 million public unsupervised rides. When will it be validated enough for Tesla fanboys? (Answer: never)

              • fragmede 2 months ago

                Fanboy or not, we don't know how much Waymo's model relies on an army of contractors labeling every stop light, sign, and trash can, or whether those are detected with LIDAR rather than cameras. We also don't know much about Tesla's Robotaxi initiative and how much human help it relies on.

                • karlgkk 2 months ago

                  First of all, their approach does not rely on map-level labeling at runtime. They do that for training, but so does every other player. High-precision maps are used as a form of GPS in their solution. They also use them to detect deltas when road conditions change and alert ops.

                  Second of all, they're using cameras to detect things like signs and trash cans! I don't know where the misconception came from that Waymo only uses lidar, but it demonstrates a lack of familiarity with the problem space and the solutions.

                  • karlgkk a month ago

                    I can't edit this, but it's bugging me: "does not rely on map-level labeling at runtime" isn't quite true. They do use tagging in their centralized mapping, which the Waymo Driver uses as one source of truth. But ultimately, the onboard sensors and the onboard classifier and pathing engine are what drive.

                    This allows them to tag things their cars are not recognizing, which are then fed back into the training set and validated. It allows them to "live patch" troublesome sections of road, but it's not intended for long-term use.

                    They also use this to close sections of road, such as for car accidents, flooding, etc.

          • guywithahat 2 months ago

            From my previous comment, in case you didn't see it

            > Again, Waymo's lead suggests this logic might be wrong, but I think there is a solid engineering defense for moving toward just computer vision. Cameras are by far the best sensor, and there are tangible benefits beyond just cost.

          • groundzeros2015 a month ago

            Waymo could be piloting cars remotely with an operator per-vehicle for all we know.

        • bulbar 2 months ago

          Your logic is sound; however, these challenges can be solved, and then you get synergy from using different sensors.

        • ambicapter 2 months ago

          I don’t understand how running into difficulties when trying to solve a problem can be interpreted as “[taking] away from the actual problem”.

          • guywithahat 2 months ago

            In our case if we're spending a lot of time on something that doesn't improve the product, it just takes away from the product. Like if we put 800 engineering hours into sensor fusion and lidar when the end product doesn't become materially better, we could have placed those 800 hours towards something else which makes the end product better.

            It's not that we ran into problems, it's that the tech didn't deliver what we hoped when we could have used the time to build something better.

      • fpoling 2 months ago

        Kalman filters and more advanced aggregators add non-trivial latency. So even if one does not care about cost, there can be a drawback from having an extra sensor.

        • _aavaa_ 2 months ago

          Yes, there are tradeoffs to be made, but having to reconcile multiple sensors is not intrinsically a negative.

          But also, if you didn’t get the right result, I don’t care how quickly you didn’t get it.

        • MadnessASAP 2 months ago

          The latency from video capture and recognition is going to be so significant that it makes all other latency sources not even worth mentioning.

        • Zigurd 2 months ago

          Cars and roads are built for human reaction times. That's why you have low speed limits in urban areas. You can have a pile of latencies attributable to processing a scene and still have superhuman reaction time that contributes to outperforming humans.

          It's analogous to communications latency. High latencies are very annoying to humans, but below a threshold they stop mattering.

    • solfox 2 months ago

      To tell what? Waymo is easily 5 years ahead on the tech alone, let alone the rollout of the autonomous service. Tesla may eventually catch up, but they are definitely behind.

    • dyauspitr 2 months ago

      This is a solved problem. Many people I know, including myself, use Waymos on a weekly basis. They are rock solid. Waymo has pretty unequivocally solved the problem. There is no wait and see.

      • goosejuice 2 months ago

        Never mind the Waymos rolling past stopped school buses.

        https://www.npr.org/2025/12/06/nx-s1-5635614/waymo-school-bu...

        • xp84 2 months ago

          This seems solvable, no? Not saying it isn’t really damn important, but those have stop signs and flashing lights. It seems like they can fix that.

          • goosejuice 2 months ago

            Solvable yes, but it's a perfect example of it not being solved yet despite this person's anecdotes.

            Engineering problems aren't limited to a single solution anyhow. Anyone ruling out a camera based solution has very little imagination.

            • dyauspitr 2 months ago

              Unless it’s a mathematical proof, solved usually means works in the overwhelming majority of cases. It’s solved for all intents and purposes.

              • goosejuice 2 months ago

                Waymo has barely seen the US. They just got on the highway and don't operate anywhere with snow.

                If that's your definition of solved, be my guest, but it's a pretty silly one.

          • xnx 2 months ago

            Indeed. The problem was already fixed.

        • dyauspitr 2 months ago

          You can cavil about this but it’s weak.

      • guywithahat 2 months ago

        I mean, if Waymo had unequivocally solved the problem, the country would be covered in them, and the only limit to their expansion would be how many cars they could produce. Currently they're limited by how quickly they can train on new areas, which is likely slowed by the fact that they're using over 20 sensors across four different types. On the other hand, Tesla could spread across the country tomorrow if they were reliable enough. I would think solving autonomous driving would imply you could go nationwide with your product.

        • dyauspitr 2 months ago

          Tesla literally has a guy sitting in there.

          • guywithahat 2 months ago

            Right, nobody has solved it. If either company had solved self driving, it would be in basically every US city. While it is my opinion that Waymo is further ahead, no company has solved the problem yet and because of that it's still not clear what the best solution will be.

    • bulbar 2 months ago

      Sure, but Tesla is already losing the race. They were ahead a few years ago, but not anymore. They bet on getting autonomous driving done with cameras only, which are cheap and have a simple, well-understood tech stack and ecosystem.

      It didn't work out though and now multi sensor systems are eating their lunch.

    • cameldrv 2 months ago

      Honestly I think it's more that he was backed into a corner. The Teslas from ~9 years ago when they first started selling "full self driving" as an option, had some OK cameras and, by modern standards, a very crappy radar.

      The radar they had really couldn't detect stationary objects. It relied on the doppler effect to look for moving objects. That would work most of the time, but sometimes there would be a stationary object in the road, and then the computer vision system would have to make a decision, and unfortunately in unusual situations like a firetruck parked at an angle to block off a crash site, the Tesla would plow into the firetruck.

      Given that the radar couldn't really ever be reliable enough to create a self driving vehicle, after he hired Karpathy, Elon became convinced that the only way to meet the promise was to just ignore the radar and get the computer vision up to enough reliability to do FSD. By Tesla's own admission now, the hardware on those 2016+ vehicles is not adequate to do the job.

      All of that is to say that IMO Elon's primary reason for his opinions about Lidar are simply because those older cars didn't have one, and he had promised to deliver FSD on that hardware, and therefore it couldn't be necessary, or he'd go broke paying out lawsuits. We will see what happens with the lawsuits.

      • xnx 2 months ago

        > he was backed into a corner

        He "painted himself into a corner" is the more accurate expression when one is the cause of their own problem

    • vjvjvjvjghv 2 months ago

      Usually you would go in with the maximum number of sensors and the most data, make it work, and then see what can be left out. It seems dumb to limit yourself from the beginning when you don't yet know what really works. But then I am not a multi-billionaire, so what do I know?

      • fpoling 2 months ago

        Well, we know that vision works, based on human experience. So a few years ago it was a reasonable bet that cameras alone could solve this. The problem with Tesla is that they still insist on it after it became apparent that vision alone, with current cameras and machine learning, does not work. They don't even want to use radar again, even though radar doesn't cost much and is very beneficial for safety.

        • jsnell 2 months ago

          Human performance won't be sufficient. Self-driving vehicles have to be noticeably (an order of magnitude) better than humans to be accepted.

        • bsder 2 months ago

          > Well we know that vision works based on human experience.

          Actually, we know that vision alone doesn't work.

          Sun glare. Fog. Whiteouts. Intense downpours. All of them cause humans to get into accidents, and electronic cameras aren't even as good as human eyes due to dynamic range limitations.

          Dead reckoning with GPS and maps is a huge advantage that autonomous cars have over humans. No matter what the conditions are, autonomous cars know where the car is and where the road is. No sliding off the road because you missed a turn.

          Being able to control and sense the electric motors at each wheel is a big advantage over "driving feel" from the steering wheel and your inbuilt sense of acceleration.

          Radar/lidar is just all upside above and beyond what humans can do.

        • vjvjvjvjghv 2 months ago

          Human vision is terrible in conditions like fog, rain, snow, darkness and many others. Other sensor types would do much better there. They should have known that a long time ago.

          • guywithahat 2 months ago

            To be fair, lidar is arguably worse in rain and snow. I don't think we know of a sensor yet which works well in these conditions

    • lotsofpulp 2 months ago

      >The only real test will be who creates the best product, and while waymo seems to have the lead it's arguably too soon to tell.

      Price is a factor. I’ve been using the free self driving promo month on my model Y (hardware version 4), and it’s pretty nice 99% of the time.

      I wouldn’t pay for it, but I can see a person with more limited faculties, perhaps due to age, finding it worthwhile. And it is available now in a $40k vehicle.

      It’s not full self driving, and Waymo is obviously technologically better, but I don’t think anyone is beating Tesla’s utility to price ratio right now.

    • gitaarik 2 months ago

      Seems to me rather that Teslas are self driving cars with a handicap; they are missing some easily obtainable data because they lack the sensors. Because their CEO is so hard headed.

      Simplifying things doesn't always make things easier.

bparsons 2 months ago

This is the sort of thing that occurs when the interests of the public become subordinate to the interests of a lawless aristocracy. Financial, social and public safety considerations are costs that can be transferred to the public to preserve the wealth of a few individuals.

  • themafia 2 months ago

    Was there a time when the interests of the public weren't subordinate?

  • cosmicgadget 2 months ago

    In the aristocracy's defense, we voted for this.

    • Zigurd 2 months ago

      Without taking money out of politics, the assertion that "we" are to blame has to assume advertising and influence peddling doesn't work.

      • cosmicgadget 2 months ago

        We can be subject to influence and still remain responsible for our own actions.

        • Zigurd 2 months ago

          Even if we had the perfect combination of public funding and low small dollar donation limits we would still need discernment and to take responsibility for our votes.

          Nevertheless, getting big money donations out of politics would be a big improvement.

jmpman 2 months ago

I’m still waiting until I see little X Æ A-Xii playing in the street while Tesla Robotaxis deliver passengers before I buy these arguments. Until then, my children are playing in the street while these autonomous vehicles threaten their safety. I’m upset that this is forced upon the public by the government.

  • dylan604 2 months ago

    This would imply you feel the parent of said kid cares about said kid more than about the parent's company.

    • awestroke 2 months ago

      At this point he's just an anxious wreck on ketamine fully trusting a broken gut feel in each and every situation

koinedad 2 months ago

The title makes it sound way worse than the 7 reported crashes listed in the article. I'd be interested to see a comparison with Waymo and other self-driving technologies in the same area (assuming they exist).

  • phyzome 2 months ago

    Converting things to rates is how you understand them in a meaningful way, particularly for things that are planned to be expanded to full scale.

    (The one thing I would like to see done differently here is including an error interval.)
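One way to sketch such an interval (with the caveat that the mileage below is a made-up placeholder, not a figure from the article): with only a handful of crashes, Poisson counting noise dominates, and even a crude normal approximation shows how wide the uncertainty band is.

```python
import math

# Rough 95% interval for a crash rate from k observed crashes in m miles,
# using the normal approximation to the Poisson count (crude for small k,
# but enough to show the width of the uncertainty).
def crash_rate_interval(k, miles):
    half_width = 1.96 * math.sqrt(k)
    return max(0.0, k - half_width) / miles, (k + half_width) / miles

# 7 crashes over a placeholder 250,000 miles of driving
lo, hi = crash_rate_interval(7, 250_000)
print(f"{lo * 1e6:.1f} to {hi * 1e6:.1f} crashes per million miles")
```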

    • jsight 2 months ago

      Yeah, I'm glad that they are trying to do a rate, the problem is that the numerator in the human case is likely far larger than what they are indicating.

      Of the Tesla accidents, five of them involved either collisions with fixed objects, animals, or a non-injured cyclist. Extremely minor versions of these with human drivers often go unreported.

      Unfortunately, without the details, this comparison will end up being a comparison between two rates with very different measurement approaches.

      • phyzome 2 months ago

        With Tesla redacting as much as they are, I think we have to assume the worst.

  • Rebelgecko 2 months ago

    I couldn't find Waymo's stats for all crashes in 12 seconds of googling, but they have breakdowns for "crashes with serious injury" (none yet) "crashes resulting in injury" (5x reduction) and "crashes where airbag deployed" (14x reduction), relative to humans in Austin

    Austin has relatively low miles so the confidence intervals are wider but not too far from what they show for other cities

  • tomhow 2 months ago

    We updated the title to the original. All, please remember the section of the guidelines about editorialising titles:

    Please use the original title, unless it is misleading or linkbait; don't editorialize.

    https://news.ycombinator.com/newsguidelines.html

  • rsynnott 2 months ago

    Given that there are only about 20 of these, 7 reported crashes seems _extremely high_.

    > and other self driving technologies

    I mean, this isn't self-driving. It has a safety driver.

ndsipa_pomu 2 months ago

How does it make sense that Tesla are allowed to redact information about a road traffic collision (RTC) on public roads? If the information is proprietary, then keep the vehicle off public roads until the cars stop crashing or until the info can be released in the event of a collision.

hnburnsy 2 months ago

Search Google News for just this week and you will see that Waymo blocked a parade for 45 minutes, drove into a police crime scene in LA, and two crashed into each other, blocking a third. Stuff happens, but obviously Fred rarely reports the Waymo incidents while being quick to write about every Tesla one.

https://www.yahoo.com/news/articles/two-self-driving-waymos-...

altairprime 2 months ago

Slap a STUDENT DRIVER bumper sticker on them so we can all give them space!

  • kevin_thibedeau 2 months ago

    It should say INDUSTRIAL ROBOT. You wouldn't willingly enter the hazard zone of a KUKA. Why should we casually accept them roaming free?

    • altairprime 2 months ago

      Most people don’t know that characteristic of industrial robots, so it wouldn’t inspire the wariness necessary to keep the novice robotic vehicles safe from all of us assuming they’re predictable. And those who do know what it means would assume predictable behavior, since that’s what industrial robots have, which is doubly problematic since now they’ll be unprepared for SQUIRREL-input, RED-OCTAGON-output neural network misfirings.

    • 0_____0 2 months ago

      If you're familiar with industrial hazard mitigation, looking at how roadways are constructed is kind of crazy-making.

    • dzhiurgis 2 months ago

      Funny how one side of commenters scream that these are roaming free and killing people while others say they are still crashing despite all the babysitting by safety drivers.

jsight 2 months ago

I spent a little bit of time poking at Gemini to see what it thought the accident rate in an urban area like Austin would be, including unreported minor cases. It estimated 2-3/100k miles. This is still lower than the extrapolation in the article, but maybe not notably lower.

We need far higher quality data than this to reach meaningful conclusions. Implying conclusions based upon this extrapolation is irresponsible.

  • mmooss 2 months ago

    I don't understand how Gemini's fabrication has any validity. What is it based on?

    • jsight 2 months ago

      It is at least as reliable as the data in the electrek article. My point is that the data naturally has error margins that are clearly large enough to make drawing concrete conclusions impossible.

      • mmooss 2 months ago

        > It is at least as reliable as the data in the electrek article.

        I don't see why you say that.

        > My point is that the data naturally has error margins that are clearly large enough to make drawing concrete conclusions impossible.

        I don't understand this one either.

narrator 2 months ago

I can imagine why they redact the reports so much: Elon-hating NGOs would gladly pay a lawyer to spend as much time as it takes suing Tesla over each crash, even completely frivolously and with no hope of recouping any of the time and money spent, and think they were doing the great work of social justice.

dubeye 2 months ago

one minor hospitalization in half a year, probably a sensible rollout pace.

rsynnott 2 months ago

At this rate, are there any left?

wizardforhire 2 months ago

How many more people have to die?

In the past it took a lot less to get the situation fixed… and those were horrendous situations! [1][2] And yet Tesla is a factor of 10 worse!

[1] https://en.wikipedia.org/wiki/Ford_Pinto

[2] https://en.wikipedia.org/wiki/Firestone_and_Ford_tire_contro...

93po 2 months ago

Electrek notoriously lies and fibs and stretches the truth to hate on Tesla and Elon as much as possible when it serves their own best interests.

This one is misleading both because 8 "crashes" is too small a sample to support conclusions about its safety compared to humans, and because these 'crashes' are not actually crashes but a variety of things, including hitting a wild animal of unknown size or potentially minor contact with other objects of unspecified impact strength.

They make other unsubstantiated and likely just wrong claims:

> The most critical detail that gets lost in the noise is that these crashes are happening with a human safety supervisor in the driver’s seat (for highway trips) or passenger seat, with a finger on a kill switch.

The robotaxi supervisors are overwhelmingly in the passenger seat only - I've never actually seen any video footage of them in the driver seat, and Electrek assuredly has zero evidence of how many of the reported incidents involved someone in the driver seat. Additionally, these supervisors in the passenger seat are not instructed to prevent every single incident (they aren't going to emergency brake for a squirrel), and to characterize them as "babysitting to prevent accidents" is just wrong.

This article is full of other glaring problems and lies and mistruths but it's genuinely not worth the effort to write 5 pages on it.

If you want some insight into why Fred Lambert might be doing this, look no further than the bottom of the page: Fred gives (sells?) "investment tips" which, you guessed it, perpetually try to convince people to sell and short Tesla: https://x.com/FredLambert/status/1831731982868369419

Feel free to look at his other posts: it's 95% trying to convince people that Tesla is going bankrupt tomorrow, and trying to slam Elon as much as possible - sometimes for good reasons (transphobia) but sometimes in ways that really harm his credibility, if he actually had any.

Lambert has also been accused of astroturfing in lawsuits, and had to go through a settlement that required him to retract all the libel he had spread: https://www.thedrive.com/tech/21838/the-truth-behind-electre...

That same source also touches on Fred and Seth's long history of swinging to either side of the bandwagon in attempts to maximize personal gain off bullshit reporting - and on them basically being a massive joke in automotive reporting.

The owner of Electrek, Seth Weintraub, also notably does the same thing: https://x.com/llsethj/status/1217198837212884993

  • Mawr 2 months ago

    Tesla is free to provide information that debunks these claims. They're the ones who redacted the details of the incidents in the first place.

    • 93po 2 months ago

      This is the state of accepted journalism now? Fabricate ridiculous claims and then make the target of your hit piece responsible for refuting it?

      • Veserv 2 months ago

        Wait, are you talking about Tesla? They are the ones who fabricated ridiculous claims, like old versions of FSD, using old hardware on arbitrary roads with untrained customers as safety drivers, averaging ~5 million miles per collision and thus being ~2-7x safer than human drivers. Given that they present no credible, auditable evidence for that claim, by your logic it should be unnecessary for anybody to refute it, and their systems cannot be demonstrated to be safe despite billions of miles.

        Despite that, the article and the public (the target of a hit piece that encourages people to endanger themselves with a system not demonstrated to be safe, with the direct intent of enriching Tesla's owners) directly refute Tesla's ridiculous claims, demonstrating they are off by multiple orders of magnitude. They do so using basic mandatory data reporting from the Robotaxi program, which runs more advanced, fine-tuned, geofenced systems with professional safety drivers (so we can only reasonably assume the normal system is worse) but which actually has scrutinized reporting requirements.

        And yet now you argue that the entity fabricating ridiculous claims for its own enrichment, Tesla, is not responsible; instead, the target of the hit piece, the ones who clearly debunked Tesla's claims as deceptive, are not only responsible for refuting them but must do so with an unimpeachable level of rigor - when the original fabricated claim lacks even the rigor we expect of an average middle school science fair, let alone of a literal trillion dollar company.

        Talk about double standards.

  • smoovb 2 months ago

    So Seth Weintraub sold $TSLA at $35 a share in Jan 2020; today it's $467. Seth missed out on gains of roughly 1,200%, and Fred, selling in Sep 2024, missed out on 100% gains.
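    Quick check of the percentage claim from the two prices given above:

```python
# Missed upside since selling, computed from the share prices quoted
# in the comment ($35 at sale, $467 today).
sell_price, today_price = 35.0, 467.0

gain_pct = (today_price / sell_price - 1) * 100
print(f"missed gain: ~{gain_pct:,.0f}%")  # ~1,234%, i.e. roughly 1,200%
```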

perrohunter 2 months ago

I'm so tired of this guy being so misleading. I used to read this blog because he had Tesla news; now all he tries to do is hate on Tesla.

  • panarky 2 months ago

    Better to spell out exactly what you think is misleading than to go for the ad hominem.

7e 2 months ago

Who will save humanity from Elon Musk?

thomassmith65 2 months ago

  With 7 reported crashes at the time, Tesla’s Robotaxi was crashing roughly 
  once every 40,000 miles [...]. For comparison, the average human driver 
  in the US crashes about once every 500,000 miles. 
  This means Tesla’s “autonomous” vehicle, which is supposed to be the future of safety, 
  is crashing 10x more often than a human driver.
That is a possible explanation for why Musk believes in people having 10x as many children. /s

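
The arithmetic behind the quoted comparison, using the two miles-per-crash figures from the article (the article rounds the result down to "10x"):

```python
# Relative crash frequency implied by the article's figures:
# one crash per 40,000 miles (Robotaxi) vs. one per 500,000 miles (human).
robotaxi_miles_per_crash = 40_000
human_miles_per_crash = 500_000

ratio = human_miles_per_crash / robotaxi_miles_per_crash
print(f"robotaxi crashes ~{ratio:.1f}x as often as the average driver")  # 12.5x
```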
natch 2 months ago

Most minor fender benders are not reported by the people involved, whereas even the most minor ones - often caused by other humans - must be assiduously reported by any company doing such a rollout.

A responsible journalist with half a clue would mention that, and tell us how that distorts the numbers. If we correct for this distortion, it’s clear that the truth would come out in Tesla’s favor here.

Instead the writer embraces the distortion, trying to make Tesla look bad, and one is left to wonder if they are intentionally pushing a biased narrative.

  • bryanlarsen 2 months ago

    Every 40,000 miles is roughly every third year for the average American. Every 500,000 miles is about once in a lifetime for the average American.

    Using your own personal experience, it should be obvious that trivial fender benders are more common than once per lifetime but significantly less common than one every couple of years.

    • natch 2 months ago

      My household alone has had two fender benders in the past six weeks, one of which will not be reported (and, maybe not relevant, both the fault of other drivers). Zooming out in time they are less common but most are unreported. The big question would be whether the 40,000 number includes unreported incidents.

ajross 2 months ago

FTA: "For comparison, the average human driver in the US crashes about once every 500,000 miles."

Does anyone know what the cite for this might be? I'm coming up empty. To my knowledge, no one (except maybe insurance companies) tallies numbers for fender bender style accidents. This seems like a weirdly high number to me, it's very rare to find any vehicle that reaches 100k miles without at least one bump or two requiring repair.

My suspicion is that this is a count of accidents involving emergency vehicle or law enforcement involvement? In which case it's a pretty terrible apples/oranges comparison.

  • pavon 2 months ago

    This NHTSA report agrees with those numbers[1]. It reports 6,138,359 crashes and 3,246,817,000,000 vehicle miles traveled in the US for 2023, which comes to about 530k miles per crash. The data comes from FARS, which only reports fatalities, and CRSS, which only includes crashes reported to the police[2]. It also only includes crashes on roadways (or cars driving off roadways), not parking lots and other private property.

    [1] https://crashstats.nhtsa.dot.gov/Api/Public/ViewPublication/...

    [2] https://www.nhtsa.gov/crash-data-systems/crash-report-sampli...
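    The miles-per-crash figure falls straight out of the two NHTSA numbers cited above:

```python
# Reproduce the ~530k miles-per-crash figure from the cited 2023
# NHTSA totals: vehicle miles traveled divided by reported crashes.
crashes_2023 = 6_138_359
vmt_2023 = 3_246_817_000_000  # vehicle miles traveled

miles_per_crash = vmt_2023 / crashes_2023
print(f"~{miles_per_crash:,.0f} miles per police-reported crash")  # ~529k
```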

    • ajross 2 months ago

      Again though, does that NHTSA report include "hit a garbage can" accidents or not? My strong suspicion is not[1], because they aren't required to be reported to anyone. So comparing it to an autonomy trial (which clearly is under such a requirement) isn't very informative. And putting it in the headline is straight up misleading.

      [1] Source: I've hit a garbage can and told no one. Until this moment.

  • habosa 2 months ago

    Yeah, as much as I think that Tesla is full of shit, there’s no way this is true. I don’t know a single person who’s driven 500k miles in their lifetime, but everyone I know has been in at least one minor accident.

    • bryanlarsen 2 months ago

      The average American drives more than 600k miles in a lifetime.
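      One way a figure in that ballpark can arise; the per-year mileage and driving-span numbers below are illustrative assumptions, not sourced from this thread:

```python
# Back-of-envelope lifetime mileage, assuming ~13,500 miles driven
# per year over ~45 years of driving (both figures are assumptions).
miles_per_year = 13_500
driving_years = 45

lifetime_miles = miles_per_year * driving_years
print(f"~{lifetime_miles:,} lifetime miles")  # 607,500
```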

      • ajross 2 months ago

        Not to nitpick, but that means if you sample randomly then you're going to find that the great majority of Americans have, in fact, driven less than 500k miles in their life.

        Also I don't think that's correct; that's a ton of driving! I strongly suspect the number you're citing is the number of miles an average American spends in a road vehicle, not actually driving it. But that counts the same "car-mile" multiple times for all the occupants, when the statistic we're arguing about right now is about the vehicle, not the occupants.

  • furyofantares 2 months ago

    > This seems like a weirdly high number to me, it's very rare to find any vehicle that reaches 100k miles without at least one bump or two requiring repair.

    It does seem like a high number to me - in 30 years of pretty heavy driving I've probably done about 500k miles and I've definitely had more than one incident. But not THAT many more than one, and I've put 100k miles on a few vehicles with zero incidents. Most of my incidents were when I was a newer driver who drove fairly recklessly.

  • jsight 2 months ago

    Somewhat amusingly, the human rate should also be filtered based upon conditions. For years people have criticized Tesla for not adjusting for conditions with their AP safety report, but this analysis makes the same class of mistake.

    1/500k miles that includes the interstate will be very different from the rate for an urban environment.

  • senordevnyc 2 months ago

    Yeah, I think that might be the stat for “serious” accidents

tigranbs 2 months ago

IMHO, this is not too bad! Coming from the software product industry, everyone knows that building features isn't the same as operating in practice and optimizing for the use case, which takes a ton of time.

Waymo has a huge head start, and it is evident that the "fully autonomous" robotaxi date is much further out than what Elon is saying publicly. They will do it, but it is not as close as the hype suggests.

  • verteu 2 months ago

    It's pretty bad given there's a Tesla employee behind the wheel supervising.

    • 93po 2 months ago

      There's no evidence the supervisor was behind the wheel for any of these, and I've never seen any footage at all where they weren't in the passenger seat.

      • verteu 2 months ago

        Thanks for the context, I didn't realize the supervisor sits in the passenger seat in Austin. They do have a kill switch / emergency brake, though:

        > For months, Tesla’s robotaxis in Austin and San Francisco have included safety monitors with access to a kill switch in case of emergency — a fallback that Waymo currently doesn’t need for its commercial robotaxi service. The safety monitor sits in the passenger seat in Austin and in the driver seat in San Francisco

        • 93po a month ago

          Waymo absolutely has a remote kill switch and remote human monitors. If anything Tesla is being the responsible party here by having a real human in the car.

          • verteu a month ago

            If they're more responsible than Waymo, why are they crashing more?

  • phyzome 2 months ago

    ...why would "more than 10x less safe than a human driver" be "not too bad"?
