Watch Zoox’s autonomous car drive around San Francisco for an hour
venturebeat.com

Interesting. Watch at 1/3 speed or so to see it in real time. (Self-driving car videos tend to be published sped up, so you don't see the mistakes.)
The key part of this is, how well does it box everything in the environment? That's the first level of data reduction and the one that determines whether the vehicle hits things. It's doing OK. It's not perfect; it often misses short objects, such as dogs, backpacks on the sidewalk, and once a small child in a group about to cross a street. Fireplugs seem to be misclassified as people frequently. Fixed obstacles are represented as many rectangular blocks, which is fine, and it doesn't seem to be missing important ones. No potholes seen; not clear how well it profiles the pavement. This part of the system is mostly LIDAR and geometry, with a bit of classifier. Again, this is the part of the system essential to not hitting stuff.
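The "boxing" step described above can be sketched very roughly as fitting axis-aligned bounding boxes to clustered LIDAR returns. This is a toy illustration with made-up cluster data, not Zoox's actual pipeline; real systems use oriented boxes, tracking, and far richer classifiers:

```python
import numpy as np

def box_clusters(clusters):
    """Fit an axis-aligned bounding box (min corner, max corner) to each
    cluster of LIDAR points. A deliberately minimal sketch."""
    return [(pts.min(axis=0), pts.max(axis=0)) for pts in clusters]

# Toy example: two point clusters in (x, y, z) metres.
pedestrian = np.array([[4.0, 1.0, 0.0], [4.2, 1.1, 1.7], [4.1, 0.9, 0.8]])
fireplug   = np.array([[6.0, -2.0, 0.0], [6.3, -1.8, 0.9]])
boxes = box_clusters([pedestrian, fireplug])
# The pedestrian box is ~1.7 m tall, the fireplug box ~0.9 m; box height is
# one cheap geometric feature a classifier can use to tell the two apart,
# and also why short objects (dogs, backpacks) are easy to miss.
```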
This is a reasonable approach. Looks like Google's video from 2017. It's way better than the "dump the video into a neural net and get out steering commands" approach, or the "lane following plus anti-rear-ending, and pretend it's self driving" approach, or the 2D view plane boxing seen from some of the early systems.
Predicting what other road users are going to do is the next step. Once you have the world boxed, you're working with a manageable amount of data. A lot of what happens is still determined by geometry. Can a bike fit in that space? Can the car that's backing up get into the parking space without being obstructed by our vehicle? Those are geometry questions.
Only after that does guessing about human intent really become an issue.
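The geometry questions above reduce to simple interval checks once everything is boxed. A minimal sketch, with made-up extents and a hypothetical clearance margin:

```python
def fits_between(left_extent, right_extent, obj_width, margin=0.3):
    """Can an object of obj_width pass through the lateral gap between two
    boxed obstacles, keeping `margin` metres of clearance on each side?
    Extents are (min, max) lateral coordinates in metres."""
    gap = right_extent[0] - left_extent[1]
    return gap >= obj_width + 2 * margin

# A 0.7 m-wide bike and a 1.4 m gap: fits with 0.3 m clearance per side.
print(fits_between((0.0, 1.8), (3.2, 5.0), 0.7))   # True
# The same bike and a 1.1 m gap: does not.
print(fits_between((0.0, 1.8), (2.9, 5.0), 0.7))   # False
```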
> Watch at 1/3 speed or so to see it in real time...
The note in the top-right says it's 2x.
Top-left
It really really bothers me that these folks are using a live city with real, non-volunteer test subjects of all ages (little kids and old folks use public streets) as a test bed for their massive car-shaped robots.
It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
I'm one of those people who say, "Self-driving cars can't happen soon enough." But I don't think that justifies e.g. killing Elaine Herzberg.
Ask yourself this: why start with cars? Why not make a self-driving golf cart? Make it out of nerf (soft foam) and program it to never go so fast that it can't brake in time to prevent a collision.
Testing these heavy, fast, buggy robots in crowds of people is extremely irresponsible.
There is a different perspective that you could use (and I’m not necessarily advocating for it; hear me out):
Human-driven cars are dangerous to the tune of ~36,000 deaths per year. Every year without the implementation of full self-driving, we pay some large percentage of that number in lives. Self-driving cars won't make it out of the lab without real driving on real roads in real scenarios. Taking appropriate precautions (a human safety driver, maybe two) and testing in the real world might save more lives overall than keeping the vehicles in a more lab-like setting for longer and missing some of the complexity of the real thing.
What about the nerf-golf-cart idea?
Most of these companies run closed-circuit tests (aka nerf-golf-carts) and thousands of hours of simulation. In the end, the only test that matters is the real test: real speed (sensor/computation latency, etc.), real sensor feeds (lighting, etc.), and real car size (momentum, etc.). Source: am a controls engineer.
There WILL be bugs and un-modellable sources of error. The real hedge in these situations is the safety driver. The death of Elaine Herzberg is very regrettable, but the fault ultimately lies with the safety driver and the training that was offered to her. She was on her phone, like thousands of drivers are right now.
> Most of these companies run closed-circuit tests (aka nerf-golf-carts) and thousands of hours of simulation.
I don't think we're talking about the same thing.
I mean build a machine that, in the real world, can't hurt people.
Make it light.
Make it soft.
Program it to limit its speed such that it can always stop before colliding with whatever (whoever) might leap out in front of it.
If the top speed is five miles per hour, so be it.
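The "always able to stop" rule proposed above has a closed form: with sensed clear distance d, reaction latency t, and braking deceleration a, the speed cap solves v·t + v²/(2a) = d. A sketch with assumed numbers; the 6 m/s² deceleration and 0.2 s latency are illustrative defaults, not measured values:

```python
import math

def max_safe_speed(clear_distance_m, decel_mps2=6.0, latency_s=0.2):
    """Largest speed v such that reaction travel (v*t) plus braking
    distance (v^2 / 2a) still fits inside the sensed clear distance.
    Solves v^2 + 2*a*t*v - 2*a*d = 0 for the positive root."""
    a, t, d = decel_mps2, latency_s, clear_distance_m
    return -a * t + math.sqrt((a * t) ** 2 + 2 * a * d)

# With only 2 m of guaranteed clear space, the cap is ~3.8 m/s (~8.6 mph),
# which is roughly the nerf-golf-cart regime.
v = max_safe_speed(2.0)
print(f"{v:.2f} m/s, {v * 2.23694:.1f} mph")
```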
The safety driver is wrong too. But she was there because Uber wanted to put car-shaped robots onto public streets.
Really the insane thing is that we mix car and pedestrian traffic at all in the first place. Oddly enough, it's the result of a deliberate campaign of propaganda: https://www.youtube.com/watch?v=-AFn7MiJz_s "Adam Ruins Everything - Why Jaywalking Is a Crime"
> Adam reveals the derogatory origins of jaywalking and explains how the auto industry made it illegal.
"If the top speed is five miles per hour, so be it."
You are not really testing, though. The whole point is to build a machine that can go the speed limit. You can test all you want at 5 mph, but let me assure you, most of the real issues will show up when you go 45 in the real world.
> The whole point is to build a machine that can go the speed limit.
Sure, eventually, when the machines and sensors and algorithms and so on are good enough. When the infrastructure can be rebuilt to accommodate them (e.g. Boring company tunnels for auto-trucking, sensors and comms in the roads and signage, &c.)
The rush to market is the whole problem. Not the ultimate concept.
I want a machine that can take my mother to her doctor's appointment now that the dementia has gotten to the point where she shouldn't take the bus on her own anymore.
Let the experiences with the toy cars guide the incremental graceful adoption of faster machines.
> most of the real issues will show up when you go 45 in the real world.
Right! So don't go 45.
Sorry, you're asking for a completely different concept. Asking for the infrastructure to change to adopt AVs is a pipe dream. Never going to happen.
> I want a machine that can take my mother to her doctor's appointment now that the dementia has gotten to the point where she shouldn't take the bus on her own anymore.
This meanwhile is already happening[1]. We’ve been testing on toy cars for 15 years. We’re not ready to remove the safety driver but we are ready for the road.
[1]https://www.theverge.com/2018/1/10/16874410/voyage-self-driv...
> you’re asking for a completely different concept.
Yes! Exactly! Car-shaped robots that weigh multiple tons and travel at 45 mph are a bad idea! It's too soon.
> Asking for the infrastructure to change to adopt AVs is a pipe dream. Never going to happen.
Sure it will. There will be "smart dust" in the asphalt, etc.
> This meanwhile is already happening
Fantastic!
> We’ve been testing on toy cars for 15 years.
How about 30?
If my robots go out into the world and kill people I'm going to feel bad for making killer robots even if they look like cars, have people inside them, and everybody else is killing people with their cars.
Is that so goddamned crazy?
Don't make killer robots.
"Am I going crazy? Or is it the world around me?"
> We’re not ready to remove the safety driver but we are ready for the road.
That sounds wrong on the face of it to me, but let's grant it for the sake of argument.
Build yourself a city, populate it with people who have signed waivers, and use that as your test lab.
Self-driving Lark? https://en.wikipedia.org/wiki/Mobility_scooter
"Sure it will. There will be "smart dust" in the asphalt, etc."
Even if you want this, the best way to get the government to move is to deploy and prove customer demand/appetite. Everything's going to plan, and early statistics are showing fewer people will die with each autonomous car on the road (with zero infrastructure changes).
The killer robots are already out there; they are being driven by distracted humans killing 35,000 people per year (who signed those waivers?!). This is trying to rapidly fix that problem.
"How about 30?"
Yup. Like I thought, moving goalposts. If we did 30, you'd say why not 45? So I gotta move on.
I imagine reducing the weight of the vehicle would affect its stopping distance, traction, turning radius, and a bunch of other car physics that would in turn produce unreliable data to train on.
I assume much of the training is done in simulation or within a controlled environment, but unfortunately the only way to train for city driving is to gather as much real-world data as possible, and that means "testing in production" with hopefully alert humans (one as backup) behind the wheel.
Speed is an important parameter: you can't do freeway driving in a golf cart. IMO regular vehicles can be safe enough with appropriate supervision. It's really once you get to the first runs without safety drivers that your backup goes away. Hopefully that will be only after your intervention rate is zero.
We can work our way to that point (freeway driving) without killing people.
Really, the problem is the rush to market not the idea itself.
That’s the heart of the original argument I posed on this thread: we really should rush to market because the status quo is quite unsafe.
The status quo is insane, IMO. I call our mixed ped/car transportation networks the "mayhem lottery" (as in, every time you go out there, you're taking a chance that you won't come back with all your limbs, or your life). It was years ago, but a neighbor's son was crossing the street in a crosswalk when someone ran a stop sign and knocked him fifty feet. I've twice seen little old ladies lying dead in the street from hit-and-run drivers.
Making bad imitations of KITT from Knight Rider is not the solution here.
https://en.wikipedia.org/wiki/Traffic_calming is cheaper and easier, for example.
Look I want robot cars, okay? I even want them ASAP.
As soon as possible without testing robots on public ways.
Build a fake city, populate it with people who have signed waivers, test there.
Sure it's expensive but at least you don't risk killing more innocent people with your experimental car-shaped robots.
I mean, what if I built a television with H.E.M? (Human Eradication Mode) If it escapes and kills someone, isn't that my fault?
Maybe if I put a human in there, give her a phone to distract her, and call her "Safety Driver," I can deflect any blame for my robot's killing spree onto her and get away with murder scot-free! It's the perfect crime!
Okay, sorry, I got a little carried away there. But I hope my point is clear: Safety: Yes. Robot Cars: also yes, but obeying the First Law of Robotics. Testing robots cars on innocent people: hard no.
> In the video, which was recorded with a safety driver behind the wheel
I can live with it. Human drivers annoy me so much that throwing the dice on autonomous cars is not a big stressor to me.
I think you're missing the narrative that the self-driving industry is pushing here. They "solved the problem" and their fleets driving around "autonomously" is being done in order to demonstrate this to the public. A golf cart is obviously unsuitable for that purpose.
I think this narrative has run out of steam at this point, by the way. Waymo's valuation has gone from $175B to $105B to $30B since 2018. Zoox specifically is now laying off engineers.
Waymo has had a single round with outside money, and that valued it at $30bn.
Wait, Zoox was valued at 175B at one point? Is this true? Can you point me to any resources on this?
GP said:
> Waymo's valuation has gone from $175B to $105B to $30B since 2018. Zoox specifically is now laying off engineers.
Waymo.
They said Waymo had that valuation.
You can't learn to operate in environments you don't train in. It would be great if we had a solution to the out-of-distribution inference/reward problem, but I don't think it really exists.
I'm firmly in the "Perfect for freight, questionable value for consumers" camp WRT autonomous cars. I also think it's irresponsible to do this, but the reality is, they are doing all the socially "appropriate" things, like getting approval from the city.
It's one of those things where I wonder if I'm not being too much of a curmudgeon. I'm sure the case could be made that these things will reduce overall traffic deaths long before they become perfect drivers, eh?
Yeah I don't buy that argument. These systems would have to be at least human level and also so pervasive that it would be almost like an inoculation.
At that point there are cheaper and easier ways to do that, which by the way are already happening. If you buy a modern car they have very impressive look ahead/smart cruise and lane keeping systems.
I agree with you, I think. To me it seems that you pretty much need AGI to drive IRL. And there are a lot of cheaper and easier things to do in the meantime. Cheers
> It's bad enough that people are driving cars all over the place; car collisions have killed more Americans than all the wars we've fought put together.
Do you want this to stop? Then we're going to have to let these people test their self-driving cars in a real environment. The more we delay this, the more people die in car accidents.
Those were already made years ago at the start of SDC innovation. A few companies are way beyond the worst human drivers now, there's already a massive amount of motor vehicle death caused by intoxication that we should worry about. Not fantasy robodeaths that we can count on one hand.
> fantasy robodeaths
They're not a fantasy, they've occurred. The reason we can count them on one hand is because few cars currently drive autonomously and there is a fail-safe human at the wheel who (most of the time) is paying attention to the road.
Even if autonomous cars are better than human drivers they will still inevitably strike and kill pedestrians and vehicle occupants; they are not a magical solution to vehicle collisions.
I used to live in North Beach on Grant and Union, and these cars were driving by constantly. I knew right away that they would show it there.
I forgot how annoying it was.
Elaine Herzberg was killed by a human driver not watching the road.
My understanding is that the software in the car detected Herzberg and could have stopped the car in time to avoid the collision, but that subroutine had been disabled due to too many false positives. The "safety driver" (in quotes because she turned out to be both unsafe and not actually driving at the time) is also at fault.
Certainly, Elaine Herzberg wouldn't have been killed by that car if it wasn't there, eh?
Don't test killer robots on the public.
What is the general view on Zoox's progress relative to other non-Waymo players, such as Argo, Aurora, and Cruise? There is the widely reported disengagements-per-mile figure, but most robotics people know it is just smoke and mirrors meant to make the regulators go away (disclosure: studied/researched robotics in grad school).
The general consensus among my AV friends (who work at a bunch of different companies) is that their AV driving stack is really good, but obviously not perfect.
I have no idea about their business model and how COVID affects that, though.
Relative to competitors, Zoox's autonomous OS is doing quite well and doesn't get enough respect. Relative to the objective, everybody is fucked.
Could you provide more context on the first part?
They've been keeping abreast.
The co-founder, Tim Kentley-Klay, was somehow able to get Jesse Levinson on board, and Jesse Levinson had no problem getting infinite street cred on board. So they were able to attract a lot of key, original robotics talent before the hype got out of control.
For a long time, though, they were low on funds, so they did lots of closed-course testing, and it wasn't until they closed a large funding round that Zoox began testing on public roads, where they performed quite well right out of the starting gate.
Now Zoox and its competitors are lost in an endless wasteland of testing, development, and validation. It's futile to attempt a comprehensive analysis between the different players; they all have their quirks. But Zoox has built all the critical infrastructure needed to do full-scale testing, and they're eyeballs-deep in it like everyone else.
However, Zoox has stormy waters ahead financially. They need another $2 billion to stay abreast in this never ending race. It's getting harder to visualize scenarios where that happens.
What nobody can do well enough to build a competitive and scalable robotaxi service is prediction in multi-agent scenarios. The AI for that just doesn't exist.
Multi-agent refers to the behavior of pedestrians, cyclists, and other cars on the road. This is especially tough in "ambiguous" junctions such as roundabouts and unprotected left turns. There, the strategy to negotiate the junction is highly context-dependent, and the information needed to find a strategy is not in the current scene. Drivers in these moments draw on a "cultural awareness" of what "should" be done. Observing a history of what people do in these situations may not be sufficient because of the long tail of unique events, or at least events unique in terms of how the computer will represent the scene. For example, if the scene is represented by a set of trajectories (really, waypoints), then the set of possibilities is infinite. All of this assumes the car "knows" it's entering and exiting a predefined scenario such as a roundabout; real-life driving is not so discrete.
On top of this, there's a liability and ethics issue. We accept teenagers getting drunk and killing people, but we cannot accept an autonomous car that fails to navigate a roundabout which would otherwise be easy for a person, sober or otherwise.
I have faith in robotaxis' ability to handle safety-critical things. The lizard-brain stuff is under control. They are still just too stupid to navigate complex traffic efficiently, without regularly hesitating and getting tripped up.
Robotaxis are Rube Goldberg machines, there are so many moving parts. The running joke at Waymo for a while was "How many engineers does it take to operate a self driving car?"
Everybody was convinced deep learning would give us all the magically brilliant AI we needed to make this work. With perception and classification problems, the robotics industry was able to go from "impossibru" to "holy shit it works" over the space of a couple of years; it was really exciting. In hindsight it's easy to see that those exciting, game-changing breakthroughs were in fact a long time coming, and that the real rate of progress in open-world robotics is excruciatingly slow and bespoke. Nobody has an ace up their sleeve.
Can you elaborate on the moving parts? Is it just too expensive to install/maintain the sensors, or do you mean the algorithm has to deal with multiple inputs/decisions?
I could envision a scenario where a city council of a less populated city with lax regulations could deploy robotaxis to their economic advantage. Do you think we'll get there soon?
Not OP, but it's as much the algorithm having to deal with different inputs as the engineers having to build all the needed systems together.
Hm, yeah, that sounds like what I have been hearing too. But the line engineers inside Waymo are very optimistic about how close we are; maybe it's just the sentiment of the moment.
So, in the scenario where predicting pedestrian/cyclist behavior holds up progress for a few more years, and given how the market has turned in SV and beyond, what's your read on how the space will play out? For example, car companies can't keep funding Aurora/Cruise/Argo because they will be facing a very tough consumer climate, so the internal fight for funding will be even fiercer. SoftBank funds Nuro, and its portfolio companies (WeWork and others) have been duds.
Google is expecting a bad 2020 ad revenue wise, unclear what will happen in 2021. The founders stepped out last year and the narrative has been that Google is less focused on "moonshots" and more on core ad business.
Are there any other deep-pocketed investors that will finance the development of AVs for another 5 years? Who will acquire the ones that are independent? An IPO doesn't seem likely for any of them, correct?
What are some examples of multi-agent scenarios they struggle with? Do you think there are paths to autonomous driving where we add infrastructure or laws to reduce the universe of scenarios that would have to be dealt with? For example, adding dedicated autonomous-driving lanes, or reducing the number of intersections between pedestrian walkways and roadways?
Thanks for the info. Can you clarify what prediction in multi-agent scenarios means?
Imagine an uncontrolled intersection. The Robotaxi is approaching from one direction. Coming the opposite way is a cyclist who intends to turn left across the Robotaxi's path. There is also a pedestrian who may or may not cross the street, and another vehicle about to cross in front of the Robotaxi from the other direction. There are a huge number of ways this scenario can play out, and any decision made by one agent can affect the behavior of all the others, compounding its complexity. Humans can game out these situations intuitively, but current AI cannot read deep enough into the matrix to deal with them quickly and reliably.
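The combinatorial blow-up described above is easy to make concrete: if each of n agents can pick one of k coarse maneuvers at each of h decision points, the number of joint futures to game out grows as k^(n·h). This is a back-of-envelope count with assumed numbers, ignoring any pruning a real planner would do:

```python
def joint_futures(n_agents, n_maneuvers, horizon):
    """Number of joint maneuver sequences across all agents: each agent
    independently picks one of n_maneuvers at each of horizon steps."""
    return n_maneuvers ** (n_agents * horizon)

# The intersection above: robotaxi, cyclist, pedestrian, crossing car.
# Assume 3 coarse maneuvers each (yield / proceed / stop) and 3 decision
# steps while negotiating the junction.
print(joint_futures(4, 3, 3))  # 531441 joint futures
```

Even with absurdly coarse discretization, four agents produce half a million joint futures over three steps, which is why exhaustive game-out is off the table and prediction has to be learned or heavily pruned.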
In Australia there are no uncontrolled intersections (that I am aware of). Every single junction clearly marks who must give way and we don't have any 4-way stops, instead using roundabouts in these situations.
It's possible that for self driving to work road systems will have to be more formalised to remove the ambiguous situations you've described. I can't imagine it working well in China or Indonesia where traffic flows much more like water in a stream and lanes are merely just suggestions.
There are definitely uncontrolled intersections once you get out of the cities. My understanding is that if unmarked, there’s an implied give-way at the side road in a T-junction and all roads in a four-way junction.
I imagine self driving cars won't be outside of cities for years and years.
Outside of cities on limited access highways seem like the much easier situation--and, frankly, a pretty significant win for both comfort and safety once people give up their dream of having a personal chauffeur for their entire lives. It's self-driving in e.g. Manhattan or Boston that I can't really begin to imagine in less than decades.
Also people regularly just ignore the lane markings (and break all the other rules too...). You have to consider all those cases as well.
What is progress like on dealing with hand signals (or other gestures) from law enforcement or construction workers directing traffic?
There are rules to that situation. You can start there, let other cars go ahead of you if they break the rules, go slow and not hit anything. It isn't as if self driving cars can't look to the side or stop if something changes. Beyond that Jim Keller would say that not getting hit by something else is a matter of ballistics.
I think what is meant is: in a case where you have a few people and AVs in an interaction, predict who's going to do what in order to best anticipate the overall outcome. Not sure humans can do that outside of conversation and norms and rules.
> There is the widely reported disengagement per mile, but most robotics people know it is just smoke and mirrors meant to make the regulators go away
Are you saying that the numbers are inaccurately reported, or accurately reported but just don't tell the whole story?
Each company gets to decide for itself what qualifies as a disengagement and each event’s severity, and the formula used is a VERY closely guarded secret.
Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Just like all other self-driving demos. I'd like to see a demo like this on snow covered roads, with no lane markings visible. I think that would tell a lot more about the system's ability to deal with an imperfect world.
Well, universality is not necessarily a useful end goal. Lyft is a successful company that doesn't operate even in Canada. A solution that works only in coastal California may well be sufficient.
Lots of things come to the Bay Area and Los Angeles before anywhere else. Partly that's because coastal California is an innovation hotbed. Partly because it's a single large rich market. Since one of these that succeeds entirely in the safe parts of California would be an incredible game-changer on its own (door-to-door small-group spikable public transit!), it's still amazingly exciting.
And while lots of Americans view many things as unchangeable, that's not the case in many other places. In China, if you were to talk to public planners about how autonomous vehicles will handle detours, they'll just say, "Oh, we'll use transmitters to tell you. We can sign the transmitters so you know they're trustworthy." Everything about the universe is mutable.
Yep, no ice road truckers will be autonomous in the next year, and that's okay.
Not to take away from your point but Lyft does operate in Canada, now.
> door-to-door small-group spikable public transit!
What does spikable mean here?
Sorry, couldn't think of a good word. Rapid increase in supply to meet demand. Because the units are smaller they are more fluid. Because they are autonomous storage doesn't have to be centralized and they aren't subject to human scheduling constraints.
Ah gotcha, thanks for clarifying.
Road infrastructure is going to change, by necessity. It seems like self-driving technology is as good as it can be, given current circumstances. There's no way to get self-driving cars to airplane safety numbers without on/near road devices/reflectors/computer-readable signage/etc, edge compute, better pedestrian understanding of what the cars are seeing and are capable of reacting to, and probably much more. It's time to give it the infrastructural boost it needs to become an everyday reality. We need to put sensors in the road when they're re-paved, transmitters in signs with solar chargers when they're replaced, LIDAR reflectors on the road sides and in medians, start offering clothing/accessories with transmitters or reflectors that clearly identify people as pedestrians...
Is the reason all of this makes more sense than just building tracks and trains just the fact that there’s an evolutionary path to get there with incremental releases along the way?
Because every time I hear this kind of thing I keep finding myself asking why/whether mass transit systems aren’t just the same end state?
I used to believe that self-driving cars were a panacea for mobility. Then I moved to Boston, sold my car, bought a bike, and realized that in dense urban cities, cars are the enemy; it doesn't matter if they are electric and autonomous. They generally don't fit in cities. Cities should be built for people, not for single-occupancy, high-speed, deadly cars.
well the ideal cheap ride-sharing AV world would have much more spread out cities, no?
IMO, the reason mass transit isn't popular in moderately populated areas (think population per sq mi/km) is that it's too inconvenient. In these places, it's much, much quicker to hop in your car and get where you're going than to try to use the public transportation system, which is a pretty sparse set of bus routes in most places.
I'm a firm believer in Elon Musk's vision for public transit, wherein you may own an autonomous vehicle that is hired out for rides by others via something like Lyft. If you choose to own a car, it'll sit at your house when you choose, but can go out and make you money while you have nowhere to go. At this point, you can imagine that there are detractors from this idea - namely those who would profit from owning all the vehicles, those who manufacture traditional vehicles, and the fossil fuel industry, to name a few. Those people are the ones who will keep us in a state of limbo as long as possible from a legal and infrastructural standpoint. We have to decide that this future is better than the one we're in. It'll have a vast impact on pollution, anthropological contributions to climate change, and human equality and prosperity.
All we have to do is start demanding progress and stop accepting mediocrity.
As far as I know "navigating a busy parking lot while raining" is a problem the autonomous car industry does not even have an idea how to solve.
In a parking lot, the car would be starting from a position it can safely stay in, so requiring a person to intervene is one option. Also, busy parking lots frequently don't stay busy forever, and heavy rain doesn't last forever either.
A look at the local weather could show where a storm is and give an estimate of when leaving can be automated, requiring a person otherwise. I think there are pragmatic answers to extreme situations.
As soon as you require human intervention, you require a sober, licensed human to be in the vehicle at all times. You no longer have a robo-taxi. It can also be hard to predict weather in much of the US. And as soon as you place too many restrictions, you no longer have a reliable transportation option.
Imagine your public transit system only ran in good weather. How useful would it be?
> As soon as you require human intervention, you require a sober, licensed human to be in the vehicle at all times. You no longer have a robo-taxi.
You have a robo-taxi that works when it's not raining, which is still pretty good (assuming it can safely disengage/pull over if it thinks the weather is getting bad).
> Imagine your public transit system only ran in good weather. How useful would it be?
I and many other Americans live in places that don't have any public transit, so a few robocars that only worked during the day would be a huge improvement.
Wait. So you're now going to just dump me at the side of the road if it starts raining?
I appreciate the general point that autonomous driving that only works under some conditions would be useful. But I think it's more along the lines of handling highway driving under most circumstances which already pre-supposes a licensed sober driver that can take over control with a minute or two notice.
I already have an option for local driving. It's called taxis/Uber/Lyft and robo versions won't be all that much cheaper.
Hmm. If I'm a city administrator, I'm not going to license fair weather robo-taxis to operate in the city. They would drive out many other taxis and then people can't get home when the weather turns bad. Anything else is just bad resource management.
This person said navigating automatically out of a crowded parking lot in the rain. You shifted this to just driving in the rain.
I'm confused. Are you asserting that driving in actual traffic in the rain is easier than navigating a parking lot in the rain? Yes, several of us generalized the issue. I don't see why that is troubling.
Because the generalization changes the question. There is a world between "can drive 90% of the time on 90% of roads" and "can drive 100% of the time on 100% of roads." The former is still extraordinarily valuable; the latter is effectively impossible. When you conflate the two, there really isn't even a point in having a discussion.
In the given example, "driving in a parking lot + rain," it's completely reasonable to pass the buck to the human driver. In your example, "driving + rain," you can't, because that situation occurs far more frequently.
> In the given example "driving in a parking lot +rain" it's completely reasonable to pass the buck to the human driver.
People like me are only looking forward to complete autonomy -- I am forbidden to drive, you see.
And it gives me some perspective... I believe society would benefit enormously if it didn't treat "everyone can drive" as a truism. If you were to break down what the concept "drive" means in terms of the simultaneous tasks a human must be capable of performing, you would quickly see how utterly ridiculous it is -- with devastating results in how urban environments have transformed and how many deaths are on the roads.
> People like me are only looking forward to complete autonomy -- I am forbidden to drive, you see.
This thread is about automated driving from a systematic point of view, it isn't about you.
> if it didn't treat "everyone can drive" as a truism.
No one said that or implied that anywhere here. You hallucinated some sort of link between this conversation and your own frustrations.
The former is very valuable so long as:
- The handoff of control to a person is well-defined and not too sudden
- You accept that autonomous driving is essentially a convenience and safety feature with a competent driver behind the wheel at all times. No using an autonomous car to drop little Jimmy off at soccer practice. (And no summoning a shared robo-taxi service.)
If it was a robo taxi and not your own car, you could go take a different robo taxi.
Also, predicting the weather a week out isn't the same as giving an ETA when you can see a map of the storm happening live. That is a completely separate scenario.
> And as soon as you place too many restrictions, you no longer have a reliable transportation option.
Yes, you do. There is a whole lot of driving that happens in perfect conditions.
Think "Truck driving, on long highways".
Replacing half of all truck drivers is still a trillion dollar industry.
And for your robo taxi situation, you can simply only allow the robotaxis to run half of the time.
Human drivers can simply get in their cars and be paid to drive when unsafe driving conditions are likely, i.e., surge driving/pricing.
But there absolutely would be days/times/places where the robo taxi would be perfectly safe. And these days/times/places could be predicted easily, ahead of time.
E.g., in Arizona, I am sure that it is safe most of the time, and there would not be any issues with snow.
Where are you going to get all these truck drivers and taxi drivers to work on call when the robots can't?
> E.g., in Arizona, I am sure that it is safe most of the time, and there would not be any issues with snow.
Parts of Arizona.
> Parts of Arizona.
Sure. That's my point. And that is still very valuable.
> I'd like to see a demo like this on snow covered roads, with no lane markings visible.
But humans can't drive well in those situations either. Why are you asking for something better than humans can do?
Humans can and regularly do so pretty safely.
Ask Canadians, Swedes or any other people living in a location with long winters.
I was once driving on a road I could not see at all. It was at night, in a blizzard on the road from Denver to Vail. It didn't take long until I was following the two red lights of the bus in front of me. As a human, I knew I could drive safely where the bus had been driving seconds ago. A self-driving car would have... tell me.
Pulled over, like you should have done.
I've been exactly where you were, driving the Coq highway in British Columbia at night in a blizzard, following two red dots in front of me. I had (mandatory) snow tires on a rear wheel drive BMW. I also had my family in the car.
It was probably the single stupidest thing I've ever done driving a car.
You can’t pull over in that situation unless you want to get your car stuck, be stuck in the cold all night, and potentially rear ended.
Are you sure those two red lights can see and know where they are going?
> I was once driving on a road I could not see at all. It was at night, in a blizzard on the road from Denver to Vail. It didn't take long until I was following the two red lights of the bus in front of me.
Perhaps you should have pulled over at this point? Maybe that's what an autonomous car would do.
Pulling over in a snowstorm to the side of the highway can be pretty dangerous too. Once you're in that situation there aren't necessarily great alternatives.
Yeah, I think people who believe that pulling over is an easy solution may not have been in one of these situations. It can come up quickly and there may be no exit for miles. Pull over into what? The snow bank on the side of the road? Now you are a hazard for everyone else coming along behind you. Sometimes you just have to make it work as best you can until the circumstances change.
Why is any traffic moving in conditions this bad? Just stop.
Again, I assume you've never experienced this. Storms can come up fast in the mountains and be very localized. As I, and the previous poster mentioned, it can be more dangerous to stop sometimes.
If you can't see, then it's always safest to just stop rather than driving on, potentially off the edge of a cliff.
How about "Don't use the self-driving mode during a snowstorm. Make the human drive."
That sounds like it solves the situation nicely.
That definitely works. It just means that if there is a possibility of a snowstorm or other circumstance that self-driving doesn't work, you now require a driver who can take over control. Which is fine. It just eliminates the robo-taxi use case of self-driving which is what a lot of people seem to care about.
I lived in Boston for 10 years. People drive on snow-covered roads just fine.
People in San Francisco struggle to drive in the rain let alone the snow.
Ha, we joke about that in Oregon, too, and it rains quite a lot here. And after a long, rainy winter, it gets a little nuts for a few days when the sun comes out again, because it seems like everyone has forgotten how to drive on sunny days.
Because "something better than humans can do" is the whole selling point of self-driving cars.
And plenty of us humans can and do drive reasonably-safely in snowy/icy conditions. It takes practice, like anything else driving-related, but it's something that most drivers north of the Mason-Dixon Line likely have quite a bit of practice with and have to handle a significant fraction of the year. It's not unreasonable to hold self-driving cars to the same standard.
> Because "something better than humans can do" is the whole selling point of self-driving cars.
I don't think so. 'As good as humans can do' would be useful.
Nope, "as good as a human" shouldn't be allowed on the road or on the market. Errors that are allowed for humans should never be allowed for a machine.
> Nope, "as good as a human" shouldn't be allowed on the road or on the market. Errors that are allowed for humans should never be allowed for a machine.
I don't understand why you'd have that opinion. If it's no riskier and relieves people from having to drive then that seems like a net benefit to me.
For the same reason that when a human pilot makes a mistake it's a mistake, but when an autopilot malfunctions, every single plane of that type is grounded until the issue can be found and fixed. Machines cannot be just as good as humans; they have to be close to perfect when human life is involved.
As another example, imagine a radiotherapy machine that, when operated manually, randomly kills 1 in 10,000 patients, but when operated by AI randomly kills 1 in 100,000. Even though that's a 10x improvement over a human operator, I'm 100% certain it still wouldn't be allowed on the market.
> even though it's a 10x improvement over a human operator it still wouldn't be allowed on the market
Hmm, I don't agree; I think people would go for that.
Then look up Therac-25, because that's roughly what happened
> Therac-25
That wasn't AI - it was a concurrency error, wasn't it?
You shouldn't discard technology because sometimes it's wrong. It can be better overall.
Why would it matter if it was AI or not? 99.9% of the public won't care whether an "AI" killed their mum in a car accident or a badly written for loop somewhere. Literally irrelevant. If a computer makes an error, people will want blood (as in, they will sue the company to absolute death); that a human would make the same mistake or worse is irrelevant.
> Why would it matter if it was AI or not?
I think people's idea of what a computer should be doing has changed a lot since then, due to the common knowledge of AI applications.
Perhaps we're close to the situation where when a person makes a mistake they're asked 'why weren't you using the computer?' I've seen this happen myself.
Your original comment asserted that a machine _shouldn't_ be allowed on the market until its performance is significantly superhuman, but your responses in this thread just repeat the assertion. What's the actual rationale for claiming that we should leave a net benefit on the table (eg if human-level driving performance improves transit efficiency)?
Exactly.
If we're willing to settle for "as good as a human" in autonomous vehicles, then IMHO all this expertise, R&D, time, effort, money, etc. would be better spent on the public transit and/or active mobility solutions of the near-future.
Unfortunately public transit is considered an epithet where I live because it brings crime from the city into my "idyllic crime free" suburb just 10 miles away from city center (no joke). I hate driving and have put my eggs into a self-driving (hopefully super low emissions) taxi service to come online.
They're probably not even detecting the lane markings or road surfaces as they're baked into the HD map.
Yes. As they're leaving the Broadway tunnel, the map shows crosswalks and lane markings on the side that belong to a street above the tunnel.
> Yawn. Good lane markings, no rain/snow or other bad weather, perfect road surfaces.
Ok... What if I were to tell you that there is a solution to this?
The solution to this, is simply "dont drive in those conditions".
A self driving car can't get in a wreck that is caused by snowy roads, if it simply doesn't drive in the snow.
Self driving, during perfect conditions is still extremely valuable, because it turns out that there is a whole lot of driving that is in perfect conditions.
So, you would do things like prevent the taxis from running, if there is any chance of rain at all. I am sure that there are lots of places where rain is not an issue, and rain could be predicted ahead of time. Not everywhere. But still in many places.
> Good lane markings
did you see that 5 lane intersection going over a tram lane? I myself had no idea where I would have driven there.
I’m pretty sure I can’t meet that standard.
Actually I'm sure you'd do just fine. Humans can just pretty effortlessly deduce required information out of numerous clues.
You lot are always so fixated on snow. Give it a decade or two and with global warming, the environment might meet us halfway on that one.
Doesn't climate change mean a higher probability of extreme events, and hence higher chances of severe winter storms/blizzards/snow?
I am out of my depth in terms of the topic we are discussing so I might be quite wrong.
Watching this, it's so frustrating that we're 95-99% there on autonomous driving.
That's the good old Pareto principle for you: the last few percent are going to take a lot more effort than the first 95%.
More to the point, this falls into the category of safety-critical systems, with the added wrinkle of potentially being used daily by millions of people. Unlike many domains where software is applied, 80% of the way there doesn't cut it, nor does 95% or 99% or even 99.9%.
(Leaving aside the fact that, for all of us not actively engaged in autonomous vehicle R&D, we likely have absolutely no idea how close we are to success here, or even what all the relevant goalposts would be.)
Remember that we've been at that level with voice recognition since the end of the last millennium.
No, voice recognition got markedly better in 2015-2016. Now I regularly do tasks with voice recognition that were not really possible in 1999.
Possibly for driving in cities and highways on clear days, but we are nowhere close to having autonomous vehicles even match human drivers in 100% of possible/likely driving circumstances and road/weather conditions. That last few percent is the highest hurdle.
more like 0.1% there in terms of the work required to launch
All in all I'm quite impressed with the demonstration. It was way more thorough than previous videos I've seen. The main things the car is failing at from what I see are the hard things: Object permanence and ad-hoc reasoning. So no surprises.
Regarding object permanence: I was impressed overall with their detection. Still, you could see kids walking close to parents blink in and out of awareness of the car. Now I'm not saying humans are very good at tracking a multitude of actors. So at some point the machines will be "good enough". But that point seems way off when significant objects like kids can just disappear from awareness when they pass behind a stroller.
And about the ad-hoc reasoning: They have the whole city mapped out! Including traffic lights and turn restrictions. I'm not even clear whether they try to detect the signs at all. I'd assume that they have an operations center that hot-patches the map with everything cropping up during the day. So the cars would send in unexpected changes to the road and they would classify those changes and patch the map. Meaning the car is tethered to that feed and not autonomous in the strictest sense. Sure, such a center would be marginal cost given a large enough fleet. Still it's a subscription you'd need for your own robocar.
They mention a lot of things they are prepared for. And I can't help but think "oh they're really good" when they say "detect backed up lanes" or "creep into intersections". But that always leaves the question what happens when they're not prepared for something. When the rules don't fit. Can the car go over a curb if the situation warrants it? Does it back out of a blocked off section? Is it even able to weigh whether backing out is an option at this point?
So I'd like to see a "what we're currently stuck at" video. But I understand one can't very well attract investors with such a video.
I agree with a significant amount of your point, but with regard to object permanence, I would guess that they have prediction algorithms that don't rely only on the current-time perception, so if something blips out of sight for a second, the system will still infer/predict its existence (for a time; obviously, if something stays hidden long enough, the system will eventually trust the perception and drop the inference).
I'd be very interested to know how that works. But I don't think they have it.
The boxes they draw are very wobbly and dimensions expand and contract directly with sensor input. Maybe they only show fused output (in itself an achievement) and there is a later step they don't show. That would be weird though because if they want to brag about their model they would definitely show it if it was any good.
that's a fair point, but it seems reasonable to me that they would separate the sensory input from the predictive/higher-level aspects of their modeling. For example, we know for a fact that they must be doing tons of prediction for both cars and people, so I think it's likely that different (non-sensory) models have the info that a person is probably still there.
Yes it's true they must have some form of persistence when they do predictions. But expected trajectory of other vehicles and pedestrians was missing from their video. A lot of other interesting feeds were missing too, so I don't know what to read into it. I tend to think that that stuff would look much worse. But maybe they just didn't want to clutter the video or show how advanced they are already.
yeah, it's possible that the stuff doesn't look very good, but my guess (maybe my hope?) is that it's too cluttering, or that careful analysis of it could reveal IP about their predictive algos
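The kind of track persistence being discussed above can be sketched minimally: coast a constant-velocity track through frames with no matching detection, and drop it after a timeout. This is a hypothetical illustration, not Zoox's actual tracker; the class, function, and threshold values (`max_missed`, `blend`, `dt`) are all made up for the example, and real systems use full multi-object Kalman or learned trackers.

```python
from dataclasses import dataclass

@dataclass
class Track:
    x: float          # estimated position (1-D for simplicity)
    v: float          # estimated velocity
    missed: int = 0   # consecutive frames with no matching detection

def step(track, detection, dt=0.1, max_missed=10, blend=0.5):
    """Advance one frame: predict, then correct if a detection matched.

    Returns the updated track, or None if the track has gone unmatched
    for too long and should be dropped.
    """
    track.x += track.v * dt              # constant-velocity predict step
    if detection is None:
        track.missed += 1                # coast through the occlusion
        return track if track.missed <= max_missed else None
    # blend prediction with measurement (a Kalman filter in spirit)
    track.v += blend * (detection - track.x) / dt
    track.x += blend * (detection - track.x)
    track.missed = 0
    return track
```

A kid passing behind a stroller would produce a run of `detection=None` frames; the track keeps moving at its last estimated velocity instead of vanishing, which is exactly the behavior the wobbly boxes in the video appear to lack.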
> Handling yellow lights properly, involves us having to predict how long they will remain yellow for
No. That isn't how yellow lights work in the US. If the light turns yellow and you have enough space/time to make a safe stop you do it. There's no need to predict the remaining time on yellow phase. We don't need robot cars bending these rules.
Not sure why you're being downvoted, but I think this is a classic example of why self-driving is so hard. They're not bending the rules, just copying what humans do. We also predict how long a light will be yellow for, but do it naturally (if you just saw it turn from green, or it was yellow as soon as it was in your line of sight).
In Delaware on Route 1 if you follow this advice you are likely to get rear ended. They have traffic lights on a 50mph route that stay in yellow for a long time.
I often find myself slowing down to a stop then awkwardly realizing I’m stopped with multiple seconds of yellow remaining and drivers honking behind me.
Maybe my brakes (or reflexes) are just too good?
No. That's what the law says but not how you drive.
Suppose you're 4 seconds from a yellow light, traveling at high speed. You can slam on your brakes and make a very abrupt stop, or you can cruise through that light and continue on your way.
If the light is about to turn red, you should probably slam on your brakes, because you risk being t-boned in the intersection.
If you have time to get through the yellow light before the cross light turns green, you should keep going, because slamming on your brakes is mildly dangerous.
The law isn't nuanced enough to capture this, with good reason. You don't want to make a good-faith judgment call about the safest action illegal.
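The trade-off described above is the classic "dilemma zone" calculation from traffic engineering: can the vehicle stop comfortably before the line, or clear the intersection before the light turns red? A hedged sketch using simple kinematics; the function name and the default values for deceleration, reaction time, and intersection width are illustrative assumptions, not any AV vendor's actual parameters.

```python
def yellow_light_decision(speed, dist_to_stop_line, yellow_remaining,
                          intersection_width=15.0, max_decel=3.0,
                          reaction_time=1.0):
    """Decide stop vs. go at a yellow light using simple kinematics.

    speed in m/s; distances in m; yellow_remaining in s (the quantity
    the quoted comment says must be predicted); max_decel is a
    comfortable braking rate in m/s^2.
    """
    # Distance needed to stop: reaction distance + braking distance v^2 / (2a)
    stopping_dist = speed * reaction_time + speed**2 / (2 * max_decel)
    can_stop = dist_to_stop_line >= stopping_dist

    # Time to clear the far side of the intersection at current speed
    time_to_clear = (dist_to_stop_line + intersection_width) / speed
    can_clear = time_to_clear <= yellow_remaining

    if can_stop:
        return "stop"   # the legally preferred choice when a safe stop exists
    if can_clear:
        return "go"
    return "dilemma"    # neither option is clean: the hard case
```

The "dilemma" branch is the situation both commenters are circling: too close to stop comfortably, too far to clear before red. Well-timed signals are supposed to eliminate that zone, which is why the legal rule "stop if you safely can" usually suffices for humans.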
This is really cool, but the environment is also really simple, and I think we're definitely at least 15+ years out before self-driving cars can handle somewhat challenging situations as well as humans.
Just try to put one of these vehicles in a situation with varying road width, no markings, and snow with no sticks to mark the edges, so you really have to pay attention to where the road actually is. What would it do if you meet a car on such a road? Try to figure out who should back up, and maybe reverse to the last place where it's wide enough? Do random tests to check for grip every now and then? It also needs to know whether the road is salted, understand whether the salt is working, and so on and on and on...
"We're definitely at least 15+ years out." Similar statements were made about Go the year it was solved. AV is a vastly harder problem and requires new techniques to get there, but AI progress can happen at any time.
This is super cool! I'm wondering how the car would react if:
* someone parked on the side opens their door too quickly and collides with the Zoox car.
* there is a car not moving in front, and the Zoox car cannot see what's in the other lane without backing up to get a better view
I'm also super impressed at how it can understand where the lane is in this 5 lane intersection that crosses a tram line. Even I couldn't understand where I would have had to drive!
>I'm also super impressed at how it can understand where the lane is in this 5 lane intersection that crosses a tram line. Even I couldn't understand where I would have had to drive!
This is actually one of those things that's easier for an AV than a human since they have localization and full lane maps of the city.
The two turns (one left and one right-on-red) leading up to getting to Market Street in the latter half of the video struck me as odd; the left turn looked like a bit of a lane sweep, and the right-on-red looked dubious (is it legal to turn right on red if you're not in the far-right lane?).
SF intersections are hard, though, and the computer seemed to handle them about as well as I would've.
One thing they only mentioned casually towards the end is that they mapped the city beforehand. So the car is starting from a position where it knows all the intersections.
I see these Highlanders (I drive the same model) parked in their lots in FiDi almost every day. Glad to see what they’re up to.
I really appreciate the calm background music.
I think background music is important, especially in such long explanatory videos, but it often becomes a reason for me to turn off a video if the music gets too aggressive.
Besides the sheer complexity of situations described in this video, I wonder how these vehicles will deal with differences in traffic rules in different countries (when even road signs can be different).
It sounds like it currently "cheats" a bit by already having driving rules, maps (including signs), etc. baked in; it'd be akin to a human driver memorizing the California Vehicle Code and a map of San Francisco word-for-word and lane-for-lane.
Presumably Zoox deployments in other cities would work similarly, "cheating" by baking in local driving rules and road maps. A consumer-owned self-driving car would likely be able to do something similar by downloading the local ruleset and maps on the fly, assuming one exists.
Is there a way to know that this isn’t done with remote control, other than the company says so?
If you think people are just going to simply lie to you then how do you ever get anything out of reading things on the internet?
By getting multiple opinions, like what the GP is presumably doing by asking such a question on a forum like Hacker News.
Such videos don't get published just because. They're either looking for more funding or for an acquisition. Which is it?
beggars can't be choosers
Casually starting the turn and not yielding to pedestrians at 10:21.
Companies actually put this kind of footage up without ever reviewing it?
Are you referring to the pedestrian who's almost crossed the crosswalk on the left side of the screen? This is still a proper yield as far as I can see. The car just enters the intersection before that person has finished crossing.
That's already the problem. Don't enter the intersection if you cannot speedily finish your turn. There is also another ped on a collision course the moment they start moving forward.
While technically right, you'll never be able to get anywhere in a big city if you drive like that.
Immaterial. I don't want self-driving cars driving like Bostonians or, worse, New Yorkers. Self-driving cars need to follow the law, drive defensively, and be conservative. If that means they take 10 additional minutes to get to their destination, so what? The alternative is that the car kills someone because of impatience, that is to say, an improperly weighted time-value function, which I would think no one wants.
I mean it's not 100% of the way there. Plus human drivers do that all the time and MUCH worse things. I'm talking from the point of view as a frequent Uber/Lyft passenger.
Casually starting a turn and correctly identifying that the person on the right slowed down, stopped and turned to face the other crosswalk.
You can see it on the top right camera.
I take it the ped at 12:22 also "turned"?
This one looks like a regular situation handled like a regular driver would handle it: the person was a reasonably long distance away from the intersection, and by the time the car got to the crosswalk it was probably going fast enough that suddenly braking would cause problems.
But yeah, it does look as something to re-analyse.
This demo is not informative as to the readiness for scalable L4 deployment, for which it would be necessary to focus on the breadth/accuracy of perception features under the hood of intent prediction and what happens at the tail end with arbitrary situations that occur in urban driving environments.
does anyone make that claim?
> This demo is not informative as to the readiness for scalable L4 deployment
Presenting a subset of the information to let the uninformed jump to favorable conclusions for the presenter is not a new marketing strategy. If there's no indication of the true level of progress, what is the purpose of the demo?
> to let the uninformed
The uninformed don't know what 'scalable L4 deployment' is, so they can't jump to that conclusion.
No, but they're familiar with the definition (start-to-finish entirely autonomous trip under somewhat-controlled driving conditions) even if they don't necessarily know the lingo to describe it. Being able to get from point A to point B without human intervention is what people expect when they hear "self-driving car", and the video does little (if anything) to temper that expectation (perhaps because it truly is ready for L4 deployment, or perhaps because it's all smoke and mirrors).
> under somewhat-controlled driving conditions
I don't even know what this means, so I doubt the uninformed know the definition.
Meaning one can see the road, chiefly.
Cheap criticism: the video starts with (I paraphrase) "This is 1 hour of driving", the last thing I expected after the fade-out/in was to see a man with a weird shirt... and then I notice the video is about 27 minutes long.
Edit to add: After that I started watching it, it's actually a video of an impressive AI.
It's played back at twice the speed. Apparent from pedestrians walking twice as fast and, of course, the 2x speed indicator in the upper left.