LiDAR and ToF sensors: What's the difference?

Lately, the term 'LiDAR sensor' has been coming up a lot in articles about the cameras on new Apple devices.

The phrase gets so much hype that it's easy to forget mobile augmented reality can work in other ways, particularly with the ToF sensors that have been taking Samsung phones to new heights.

So what, in the end, is the difference between LiDAR and ToF?

What is ToF?

ToF stands for Time of Flight. A ToF sensor uses the known speed of light (or, in some systems, sound) to determine distance. It measures how long it takes light to leave the device, hit an object, and bounce back; that round-trip time, cut in half, gives the distance from the device to the object or plane.

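As a rough illustration of that arithmetic, here is a minimal sketch in Kotlin (the function name and the example round-trip time are hypothetical, not taken from any sensor API):

```kotlin
// Minimal sketch of the Time-of-Flight calculation.
const val SPEED_OF_LIGHT = 299_792_458.0 // meters per second

// The round-trip time is halved because the light travels
// out to the object and back to the sensor.
fun distanceMeters(roundTripSeconds: Double): Double =
    roundTripSeconds * SPEED_OF_LIGHT / 2.0

fun main() {
    // A hypothetical round trip of ~13.3 nanoseconds puts the
    // object roughly 2 meters away.
    println(distanceMeters(13.3e-9)) // ≈ 1.99
}
```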

So all LiDAR is a type of Time of Flight, but not all ToF is LiDAR. For simplicity, 'ToF' from here on refers to optical distance measurement that isn't LiDAR.

What is LiDAR?

LiDAR stands for Light Detection and Ranging. This technology uses a laser, or a grid of laser beams, as its light source.


A single LiDAR reading can be used to measure something like the width of a room, but multiple LiDAR readings can be combined into 'point clouds'. These points are used to create a 3D model of an object or a map of an entire area.

While LiDAR may be new on mobile devices, the technology itself has been around for a while. Off phones, LiDAR is used for everything from mapping underwater environments to surveying archaeological sites.

Difference between LiDAR and ToF

The functional difference between LiDAR and other forms of ToF is that LiDAR uses pulsed lasers to build a 'point cloud', which is then used to construct a 3D map or image. ToF applications instead create a 'depth map' based on light detection, usually through a standard RGB camera.
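
The two formats are closely related: once the camera's optics are known, a depth map can be back-projected into a point cloud. Here is a minimal sketch of that conversion, assuming a pinhole camera model; all names (depthMm, fx, fy, cx, cy) are hypothetical:

```kotlin
data class Point3(val x: Float, val y: Float, val z: Float)

// Back-project a depth map into a point cloud using a pinhole
// camera model with focal lengths (fx, fy) and principal point (cx, cy).
fun depthMapToPointCloud(
    depthMm: IntArray,    // one depth value per pixel, in millimeters
    width: Int, height: Int,
    fx: Float, fy: Float,
    cx: Float, cy: Float
): List<Point3> {
    val cloud = mutableListOf<Point3>()
    for (v in 0 until height) {
        for (u in 0 until width) {
            val d = depthMm[v * width + u]
            if (d == 0) continue      // 0 means "no depth estimate"
            val z = d / 1000f         // convert to meters
            // Invert the projection u = fx * x / z + cx (likewise for v).
            cloud.add(Point3((u - cx) * z / fx, (v - cy) * z / fy, z))
        }
    }
    return cloud
}
```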

The advantage of ToF over LiDAR is that ToF requires less specialized equipment, so it can be used in small, inexpensive devices. LiDAR's advantage comes from the fact that a computer can read a point cloud more easily than a depth map.

The Depth API that Google created for Android works best on ToF-enabled devices. It works by creating depth maps and recognizing 'feature points'. These feature points are often boundaries between different light intensities, which are then used to identify the different planes in the environment. This essentially creates a lower-resolution point cloud.
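
As a concrete illustration, this is roughly what turning on the Depth API looks like in an ARCore app. It's a minimal sketch assuming an existing ARCore Session and a per-frame callback, with error handling trimmed:

```kotlin
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException

// Enable depth if this device supports it (ToF hardware improves
// the results, but the API also works from a single RGB camera).
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Read the depth map ARCore produced for the current frame.
fun readDepth(frame: Frame) {
    try {
        frame.acquireDepthImage16Bits().use { depthImage ->
            // Each pixel is a 16-bit distance estimate in millimeters.
            val depthBuffer = depthImage.planes[0].buffer
        }
    } catch (e: NotYetAvailableException) {
        // Depth isn't available for the first few frames.
    }
}
```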

How ToF and LiDAR work with mobile AR

Depth maps and point clouds are great, and for some applications they are enough. However, for most AR applications this data has to be contextualized. Both ToF and LiDAR do this by working alongside the other sensors on the mobile device; specifically, the platform needs to understand the direction and movement of your phone.
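
AR frameworks do this sensor fusion internally, but as a rough illustration, here is how an app could track the phone's orientation itself with Android's standard rotation-vector sensor (the class name OrientationTracker is hypothetical):

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Tracks device orientation from the rotation-vector sensor, which
// fuses the gyroscope, accelerometer, and magnetometer.
class OrientationTracker(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    val rotationMatrix = FloatArray(9)

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    override fun onSensorChanged(event: SensorEvent) {
        // Convert the latest rotation vector into a 3x3 matrix that
        // depth readings can be transformed into world space with.
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```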

Is LiDAR better than ToF?

Strictly speaking, LiDAR is faster and more accurate than ToF. However, the gap matters more for some applications than for others.

For example, ToF and Google's Depth API have difficulty understanding large, low-texture planes like white walls. This can make it hard for applications using this method to precisely place digital objects on certain surfaces in the physical world. Applications that use LiDAR are less likely to have this problem.

However, applications in larger or more texturally varied environments are unlikely to run into this issue. Furthermore, most consumer mobile AR apps apply AR filters to the user's face or body, a use case that is unlikely to break because of large untextured surfaces.

Why do Apple and Google use different depth sensors?

When it released its LiDAR-compatible devices, Apple said the sensor works with the rest of the hardware to 'open up more professional workflows that support professional photo and video applications.' Apple pitched the LiDAR-compatible iPad Pro as 'the world's best device for augmented reality' and showed off the hardware with its own measurement apps.

Google hasn't said much about why its Depth API and the new line of devices that support it don't use LiDAR. Working around LiDAR keeps Android devices lighter and more affordable, and it also has a huge accessibility advantage.

Since Android phones are made by many different companies, requiring LiDAR would favor LiDAR-equipped models over all the others. Furthermore, since it requires only a standard camera, the Depth API is backwards compatible with many more devices.
