Mapping at Aurora
I don't have much background knowledge in mapping or self-driving, so I thought this was well-written in a way that anyone could understand! Re: breaking down the data framework into world geometry vs. semantic annotations: that makes sense, but how often would you need to update the annotations? Just running and biking around San Francisco, I feel like there's no shortage of construction that creates changes to lanes/stop signs/lights and the rules of traffic flow. How do you account for these, or would world geometry data be enough here?
I think you've hit upon one of the most challenging aspects of mapping, which is that the real world is incredibly dynamic. And it's not just the things we usually think of as dynamic (bicyclists, pedestrians, other vehicles); it's also the geometry itself (lane lines, vegetation, building construction). So this requires some form of updating (semantic, geometric, or both). And as you pointed out, this is not uniform: dense city maps can change from week to week, whereas rural maps require less frequent updates. So you first need an on-board/offline system that can detect these changes, then configure the pipeline so that the proper data is collected and updates can be compact and fast. As mentioned in the article, we have a git-like structure for Atlas updates which allows us to track small changes. And the fact that the Atlas is only locally consistent means that any update is contained to just the detected area.
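To make the "git-like, locally consistent" idea concrete, here's a minimal sketch (my own toy illustration, not Aurora's actual Atlas code) of a content-addressed tile store: each map tile keeps its own revision chain, so committing a change to one tile never touches its neighbors, and history can be tracked per tile. All names here (`TileVersion`, `TileStore`, the tile IDs) are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional
import hashlib
import time

@dataclass(frozen=True)
class TileVersion:
    """One immutable revision of a map tile (geometry + semantic annotations)."""
    tile_id: str            # e.g. a geographic cell identifier
    payload: bytes          # serialized lane lines, signs, etc.
    parent: Optional[str]   # hash of the previous revision, git-style
    timestamp: float

    @property
    def digest(self) -> str:
        # Content-addressed hash over payload + parent, like a git commit id.
        h = hashlib.sha256()
        h.update(self.payload)
        h.update((self.parent or "").encode())
        return h.hexdigest()

class TileStore:
    """Per-tile revision chains: an update is contained to the changed tile,
    mirroring the 'locally consistent' property described above."""
    def __init__(self) -> None:
        self.heads: dict[str, TileVersion] = {}

    def commit(self, tile_id: str, payload: bytes) -> str:
        parent = self.heads.get(tile_id)
        rev = TileVersion(tile_id, payload,
                          parent.digest if parent else None, time.time())
        self.heads[tile_id] = rev
        return rev.digest

store = TileStore()
v1 = store.commit("sf/market_st/block_12", b"lane lines v1")
v2 = store.commit("sf/market_st/block_12", b"lane lines v2 (construction)")
# The new head points back at the old revision, so small changes are tracked,
# and no other tile in the store was touched by this update.
assert store.heads["sf/market_st/block_12"].parent == v1
```

The payoff of this structure is exactly what the reply describes: when a change is detected in one area, only that tile's chain grows, so the update stays compact and fast to ship.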
Is there a way to know the "freshness" of the data? Obviously if I'm driving and a car in front of me just imaged the road I can be much more reasonably certain that there isn't any debris on the road surface, or new potholes that have opened up.
I think that there is a grey area between data that is sufficiently static to represent in a map (buildings, lane lines) and data that is dynamic and must be handled by on-board perception systems (pedestrians, road debris). A good way to think of this is that the map is encoding priors about the environment for use by the perception system. It may be possible in the future to do all of this on-board (like humans do!) but the computational constraints would be quite high.
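A toy way to see the "map as priors" framing (my own illustration, not how Aurora's stack actually works): treat the map's belief that a stop sign exists at a location as a prior, and fuse it with a live detection score via a simple Bayes update in odds form. The function name and numbers below are invented for the example.

```python
def posterior_stop_sign(map_prior: float, detector_likelihood_ratio: float) -> float:
    """Bayes update in odds form.

    map_prior: P(sign present) encoded in the map.
    detector_likelihood_ratio: P(obs | sign) / P(obs | no sign)
        from the on-board perception system.
    """
    prior_odds = map_prior / (1.0 - map_prior)
    post_odds = prior_odds * detector_likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A strong map prior keeps confidence high even when the live detection
# is weak (e.g. the sign is partially occluded):
confident = posterior_stop_sign(0.95, 0.5)

# With a flat prior, perception alone has to carry the decision:
uncertain = posterior_stop_sign(0.5, 0.5)

assert confident > uncertain
```

This is why the grey area matters: the stronger and fresher the prior the map can encode, the less work (and compute) the on-board system has to do at drive time.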
To add to this: it's not just computational cost, it's also about safety. There is a human review process for every element of the map, which enforces a level of quality and safety that is extremely difficult to ensure with a fully on-the-fly system.
Hi all! I'm Greg and I work on the Mapping Team at Aurora. I'd be happy to answer any questions about the article and mapping for self-driving.