
How do self-driving cars “see”? - Sajan Saini

TED-Ed Animation

Let’s Begin…

It’s late and pitch dark as a self-driving car winds down a narrow country road. Suddenly, three hazards appear at the same time. With no human at the wheel, the car relies on its smart eyes: sensors that can resolve all of these details in a split second. How is this possible? Sajan Saini explains how LiDAR and integrated photonics technology make self-driving cars a reality.

Additional Resources for you to Explore

Handing over a car’s steering wheel to a disembodied decision-maker cuts to the heart of our anxiety about over-reliance on advanced machines. But our mastery of light-based technology and computers has already given society a reliable internet and cloud computing. Check out this TED-Ed video on fiber optics and integrated photonics: light-manipulating devices that are surely and steadily transforming mobile communications, sensors, and 3D laser-based LiDAR imaging.

If you’re curious, learn about the variety of LiDAR uses, from airborne mapping of a forgotten Mayan city in the Guatemalan jungle to tracking water vapor, measuring wind flow, and gauging snow depth to predict avalanches. Then take a short primer on how LiDAR works; to learn more, consult a National Academies Press review and a LiDAR calculator on imaging resolution.

Next, go one step closer to a LiDAR chip with a version that fits in the palm of your hand: the FaceID feature in the iPhone X smartphone, based on a Time-of-Flight (ToF) sensor. What new ethics questions in biometrics does FaceID raise? Dive into augmented imaging with a technical article on Microsoft’s Kinect 3D sensor, which detects user gestures to create a holographic mixed reality via a HoloLens display. If you feel ambitious, challenge yourself with articles on LiDAR calibration errors and complex imaging techniques.

This scientific article reviews ToF for measuring distance, also known as ranging, with laser pulses. An academic blog post and a tutorial by Adafruit show how to run a ToF LiDAR sensor; in these low-speed sensors, an infrared laser is switched on and off to create pulses. As this direct modulation rate increases (for higher depth resolution or sampling rate), laser instabilities such as timing jitter and frequency chirp limit ToF precision. For a laser pulse shorter than half a nanosecond, an external Mach-Zehnder modulator creates stable signals; with integrated photonics, a low-power, high-speed version can enhance ToF LiDAR. Review the century-old concept of the Mach-Zehnder interferometer: it measures the phase difference between two waves as a change in light intensity.
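
To make the numbers concrete, here is a minimal Python sketch (not tied to any particular sensor or library) of the two relations above: converting a pulse’s round-trip time into distance, and an ideal Mach-Zehnder interferometer’s conversion of a phase difference into an intensity change.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_time_s: float) -> float:
    """Distance to a target from a pulse's round-trip time: d = c * t / 2."""
    return C * round_trip_time_s / 2.0

def mach_zehnder_intensity(phase_difference_rad: float, input_intensity: float = 1.0) -> float:
    """Output intensity of an ideal Mach-Zehnder interferometer:
    I_out = I_in * cos^2(delta_phi / 2).
    A phase difference between the two arms reads out as a brightness change."""
    return input_intensity * math.cos(phase_difference_rad / 2.0) ** 2

# A half-nanosecond round trip corresponds to about 7.5 cm, which is why
# sub-nanosecond pulses (and the stable modulators that create them) matter
# for fine depth resolution.
print(f"{tof_range(0.5e-9) * 100:.1f} cm")        # ~7.5 cm
print(f"{mach_zehnder_intensity(0.0):.2f}")       # 1.00: constructive interference
print(f"{mach_zehnder_intensity(math.pi):.2f}")   # 0.00: destructive interference
```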

When a light pulse reflects from an object’s surface, the avalanche photodiode (APD; learn more here) enables faster detection, overcoming the conventional link between pulse duration and depth resolution. Articles by Toyota researchers and academics describe how APD designs achieve single-photon counting for precise measurements, and rely on a pixel-camera layout to rapidly map 3D objects (see the sketch below). Recent innovations in LiDAR include filters to improve the signal-to-noise ratio, and slow or random modulation of laser light (as opposed to pulsing) to increase ranging accuracy and lower laser power. Finally, learn more about laser beam steering and how an integrated photonic chip does this with a bank of phase modulators.
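
As a rough illustration of that pixel-camera idea, the sketch below converts a hypothetical frame of per-pixel round-trip times from a ToF sensor into a 3D point cloud. It assumes each pixel corresponds to a known viewing angle spread evenly across the field of view, a simplification of real sensor optics; the field-of-view and timing numbers are made up for the example.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_frame_to_point_cloud(round_trip_times_s: np.ndarray,
                             fov_horizontal_rad: float,
                             fov_vertical_rad: float) -> np.ndarray:
    """Convert a 2D array of per-pixel round-trip times into an N x 3 point cloud.

    Each pixel is assumed to look along a fixed azimuth/elevation direction,
    spread evenly across the sensor's field of view; its measured time gives
    the range along that direction."""
    rows, cols = round_trip_times_s.shape
    ranges = C * round_trip_times_s / 2.0                                    # per-pixel distance, m
    azimuth = np.linspace(-fov_horizontal_rad / 2, fov_horizontal_rad / 2, cols)
    elevation = np.linspace(-fov_vertical_rad / 2, fov_vertical_rad / 2, rows)
    az, el = np.meshgrid(azimuth, elevation)
    x = ranges * np.cos(el) * np.sin(az)   # left/right
    y = ranges * np.sin(el)                # up/down
    z = ranges * np.cos(el) * np.cos(az)   # forward
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)

# Example: a tiny 4 x 6 frame of ~66.7 ns round trips (targets about 10 m away),
# imaged over a hypothetical 30-by-20-degree field of view.
frame = np.full((4, 6), 66.7e-9)
cloud = tof_frame_to_point_cloud(frame, np.radians(30), np.radians(20))
print(cloud.shape)  # (24, 3)
```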

The jury is still out on how comprehensive LiDAR will be as a 3D imaging tool in self-driving cars. This accessible article by Jeff Hecht thoroughly reviews automobile LiDAR and its competitors. Check out a WIRED video on autonomous cars with LiDAR in action. While the LiDAR-maker Luminar strives to design low-power LiDAR and Waymo focuses on low-cost production, the transition to LiDAR chips has begun with cheaper solid-state LiDAR from Velodyne and Quanergy Systems. Tesla has taken a firm position against LiDAR for its cars in favor of microwave radar: this review and blog contrast the two approaches. Radar can be packaged compactly (up to a point), and mmWave imaging can capture detail comparable to LiDAR, but it lacks LiDAR’s depth resolution.

In contrast to LiDAR’s ToF rendering of “3D point cloud” images, human eyes rely on stereoscopic vision to perceive depth; this online explainer clarifies the distinction between depth of field and depth of focus. Explore how we infer depth in a Scientific American article and an online primer, and how people with monocular vision manage this feat. Take a crack at a classic paper on stereoscopic depth perception and learn about recent insights from academic research.
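
For contrast with LiDAR’s direct ranging, here is a small sketch of the standard pinhole-camera relation behind stereoscopic depth, Z = f * B / d: two views separated by a baseline B see a nearby feature shifted by a disparity d, from which depth follows. The focal length and baseline below are illustrative numbers, not measurements of human vision.

```python
def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from binocular disparity in a rectified stereo pair: Z = f * B / d.
    Nearer objects shift more between the two views (larger disparity)."""
    return focal_length_px * baseline_m / disparity_px

# Example: two cameras 6.5 cm apart (roughly the spacing of human eyes),
# an 800-pixel focal length, and a feature that shifts 20 pixels between views.
print(f"{stereo_depth(800.0, 0.065, 20.0):.2f} m")  # 2.60 m away
```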

To learn more about integrated photonics and how the technology is transforming low-power cloud computing, hyperfast wireless, smart sensing, and augmented imaging, visit the AIM Photonics Institute and its education program at MIT, AIM Photonics Academy. Then step back to learn about other advanced-manufacturing institutes like AIM that are transforming robotics, smart fabrics, flexible electronics, 3D printing, bio-fabrication, and other high-tech fields.

Sajan Saini is a former materials scientist and science writer. He directs the educational curriculum for AIM Photonics Academy at MIT. He has written for Coda Quarterly, MIT Ask an Engineer, Harper's Magazine, and TED-Ed. Learn about Sajan here.

About TED-Ed Animations

TED-Ed Animations feature the words and ideas of educators brought to life by professional animators. Are you an educator or animator interested in creating a TED-Ed Animation? Nominate yourself here »

Meet The Creators

  • Educator: Sajan Saini
  • Director: Igor Coric
  • Narrator: Addison Anderson
  • Animator: Nemanja Petrovic
  • Producer: Milica Lapcevic
  • Sound Designer: Nemanja Petrovic
  • Director of Production: Gerta Xhelo
  • Editorial Producer: Alex Rosenthal
  • Associate Producer: Bethany Cutmore-Scott
  • Script Editor: Eleanor Nelsen
  • Fact-Checker: Brian Gutierrez
