
David Silver
Staff Software Engineer at Kodiak Robotics
This Nanodegree prepares you to design, build, and optimize self-driving car systems by mastering computer vision, sensor fusion, localization, and control through hands-on projects using C++, Python, and deep learning.

Subscription · Monthly
Welcome to the Self-Driving Car Engineer Nanodegree program! Learn about the Nanodegree experience and hear from Waymo, one of Udacity's partners for the program.
1 hour
You are starting a challenging but rewarding journey! Take 5 minutes to read how to get help with projects and content.
Hear from Waymo, one of the most cutting-edge autonomous vehicle companies out there! You'll learn about the company as well as about the Waymo Open Dataset, which you'll use in parts of the program.
In this course, you will develop critical Machine Learning skills that are commonly leveraged in autonomous vehicle engineering. You will learn about the life cycle of a Machine Learning project, from framing the problem and choosing metrics to training and improving models. This course focuses on the camera sensor, and you will learn how to process raw digital images before feeding them into different algorithms, such as neural networks. You will build convolutional neural networks using TensorFlow and learn how to classify and detect objects in images. By the end of the course, you will have worked through the whole Machine Learning workflow and gained a solid understanding of a Machine Learning Engineer's work and how it translates to the autonomous vehicle context.
19 hours
Dive into Deep Learning for Computer Vision, learning about its use cases, history, and what you’ll build by the end of the course.
Machine learning is more than just building a model - getting each step of the workflow right is crucial.
Learn how to calibrate your camera to remove distortions for improved perception.
Build skills in linear and logistic regression before taking on feedforward neural networks, a type of deep learning.
Convolutional networks improve on feedforward networks for areas such as image classification - let’s get started building them!
Object detection builds on classification by finding multiple important objects within a single image - find out how!
Use the Waymo dataset to detect objects in an urban environment.
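At the heart of the convolutional networks this course builds is a single operation: sliding a small kernel over an image and summing elementwise products. As a hedged illustration (plain Python, not course code — real networks use optimized libraries like TensorFlow), a minimal "valid" 2D convolution might look like this:

```python
# Minimal sketch of the 2D "valid" convolution (strictly, cross-correlation,
# as implemented in most deep learning libraries) inside a convolutional layer.
# Illustrative only; function and variable names are hypothetical.

def conv2d(image, kernel):
    """Slide `kernel` over `image` (both lists of lists) with stride 1."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    output = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of elementwise products over the kernel window.
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        output.append(row)
    return output

# A vertical-edge kernel applied to an image whose right half is bright:
image = [[0, 0, 1, 1] for _ in range(4)]
kernel = [[-1, 1],
          [-1, 1]]
edges = conv2d(image, kernel)
# The response is strongest where the dark/bright boundary falls under the kernel.
```

In a trained CNN, the kernel weights are not hand-designed like this edge detector; they are learned from labeled data during training.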
Besides cameras, self-driving cars rely on sensors with complementary measurement principles, combined through sensor fusion to improve robustness and reliability. You will learn about the lidar sensor, different lidar types, and relevant criteria for sensor selection. You will also learn how to detect objects in a 3D lidar point cloud using a deep-learning approach, and then evaluate detection performance using a set of metrics. In the second half of the course, you will learn how to fuse camera and lidar detections and track objects over time with an Extended Kalman Filter. You will get hands-on experience with multi-target tracking, where you will initialize, update, and delete tracks, assign measurements to tracks with data association techniques, and manage several tracks simultaneously.
25 hours
Get started with sensor fusion and perception, why they are important, and the history of their development in self-driving cars.
Learn about the lidar sensor, capable of capturing important 3D data in point clouds.
Detect objects from the 3D data coming in from a lidar sensor.
Use the Waymo dataset to detect 3D objects in the surrounding environment.
Learn from the best! Sebastian Thrun will walk you through the usage and concepts of a Kalman Filter using Python.
Build an Extended Kalman Filter that's capable of handling data from multiple sources.
Get your tracking skills ready for the real world by learning how to track multiple targets simultaneously.
Use the Waymo dataset, along with sensor fusion, to track multiple 3D objects in the surrounding environment.
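The predict/update cycle that Sebastian Thrun's Kalman Filter lesson introduces, and that the Extended Kalman Filter later generalizes to nonlinear multi-sensor fusion, can be sketched in one dimension. This is an illustrative toy (names and noise values are hypothetical, not course code):

```python
# Illustrative 1-D Kalman filter: a Gaussian belief over position is
# alternately widened by motion and sharpened by measurements.

def predict(mean, var, motion, motion_var):
    """Motion step: the state shifts and uncertainty grows."""
    return mean + motion, var + motion_var

def update(mean, var, meas, meas_var):
    """Measurement step: fuse prior and measurement, weighted by certainty."""
    new_mean = (meas_var * mean + var * meas) / (var + meas_var)
    new_var = 1.0 / (1.0 / var + 1.0 / meas_var)
    return new_mean, new_var

# Track a position from noisy measurements with unit motion between steps.
mean, var = 0.0, 1000.0                      # vague prior
for z in [5.0, 6.0, 7.0, 8.0]:
    mean, var = update(mean, var, z, meas_var=4.0)
    mean, var = predict(mean, var, motion=1.0, motion_var=2.0)
# After a few cycles the estimate settles near the true trajectory and the
# variance shrinks far below the prior.
```

The Extended Kalman Filter in the course replaces these scalar formulas with matrix equations and linearizes nonlinear measurement models (such as radar or camera projections) around the current estimate.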
In this course, you will learn all about robotic localization, from one-dimensional motion models up to using three-dimensional point cloud maps obtained from lidar sensors. You’ll begin with the bicycle motion model, which uses simple kinematics to estimate the vehicle’s location at the next time step before any sensor data arrives. Then, you’ll move on to using Markov localization to perform 1D object tracking. From there, you will learn how to implement two scan matching algorithms, Iterative Closest Point (ICP) and Normal Distributions Transform (NDT), which work with 2D and 3D data. Finally, you will utilize these scan matching algorithms in the Point Cloud Library (PCL) to localize a simulated car with lidar sensing, using a 3D point cloud map obtained from the CARLA simulator.
16 hours
Meet the team that will guide you through the localization lessons, and learn the intuition behind robotic localization!
Are you ready to build Kalman Filters with C++? Take these quizzes to find out!
Learn the math behind localization, as well as how to implement Markov localization in C++.
Learn about and build two scan matching algorithms for localization: Iterative Closest Point (ICP) and Normal Distributions Transform (NDT).
Learn how to align point clouds with ICP and NDT before leveraging them to localize a self-driving car in a simulated environment!
Localize a self-driving car within a point cloud from the CARLA simulator with the localization algorithms you learned in previous lessons - how accurate is your algorithm?
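The kinematic bicycle model that opens this course predicts the vehicle's next pose from its current pose, speed, and steering angle. As a hedged sketch (variable names and the rear-axle formulation are illustrative, not the course's exact notation):

```python
# Sketch of a kinematic bicycle motion model: given pose (x, y, heading),
# speed v, and steering angle delta, predict the pose one time step ahead.
import math

def bicycle_step(x, y, theta, v, delta, wheelbase, dt):
    """Advance the pose by dt seconds, assuming constant speed and steering."""
    if abs(delta) < 1e-9:
        # Straight-line motion: heading is unchanged.
        return x + v * dt * math.cos(theta), y + v * dt * math.sin(theta), theta
    # Turning motion: the vehicle follows a circular arc.
    turn_rate = v / wheelbase * math.tan(delta)
    theta_new = theta + turn_rate * dt
    x_new = x + v / turn_rate * (math.sin(theta_new) - math.sin(theta))
    y_new = y + v / turn_rate * (math.cos(theta) - math.cos(theta_new))
    return x_new, y_new, theta_new

# Drive straight east at 10 m/s for one second...
x, y, theta = bicycle_step(0.0, 0.0, 0.0, v=10.0, delta=0.0, wheelbase=2.7, dt=1.0)
# ...then one step with the wheels turned left curves the path upward.
x2, y2, theta2 = bicycle_step(x, y, theta, v=10.0, delta=0.2, wheelbase=2.7, dt=1.0)
```

Because this prediction drifts over time, the course pairs it with sensor corrections: Markov localization in 1D, then ICP and NDT scan matching against a point cloud map in 3D.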
Unlike typical professors, our instructors come from Fortune 500 and Global 2000 companies and have demonstrated leadership and expertise in their professions:

David Silver
Staff Software Engineer at Kodiak Robotics

Thomas Hossler
Sr Deep Learning Engineer

Antje Muntzinger
Professor of Computer Vision


Aaron Brown
Senior Software Engineer

Munir Jojo Verge
Lead Autonomous & AI Systems Developer at MITRE

Mathilde Badoual
Fifth year PhD student at UC Berkeley

Average Rating: 4.2 (45 Reviews)
It's a very intense and advanced program, highly recommended for professionals who want to build a career as a perception engineer. It contains all the aspects of the self-driving car specialization: computer vision, sensor fusion, localization, planning, and control. Additional chapters on career services and interview preparation will help you start applying for autonomous driving vacancies quickly after graduation.
Exceptional program that equips you with the skills needed for the future of autonomous vehicles. Challenging projects, great instructors, and a supportive community make it a top choice for anyone passionate about self-driving technology.
The CARLA simulator failed all the time! I waited several weeks with no way to solve it.
This program provides the overall picture of a self-driving car system. It allowed me to learn the system systematically.
The control part of the course needs more content (it is very basic).
Udacity's Self-Driving Car Engineer Nanodegree program is an advanced self-driving car course designed for those aspiring to become leaders in autonomous vehicle technology. The program covers critical areas such as machine learning, computer vision, and sensor fusion, providing hands-on experience in building and testing self-driving car systems. Our expert instructors, drawn from leading companies in the autonomous vehicle industry, guide students through real-world projects. This offers a unique opportunity to apply learning in practical scenarios, preparing students for exciting careers in this innovative field.