The Role of the LiDAR Scanner and Time-of-Flight (ToF) Cameras in Mobile AR Experiences
NEWS
Simultaneous Localization and Mapping (SLAM) has been recognized as one of the most essential elements for enabling Augmented Reality (AR) experiences. More specifically, SLAM enables devices to orient themselves within and map unknown environments without any external location reference or tracking technology (such as GPS). Consequently, devices are able to understand a user’s environment contextually and sometimes semantically (identifying specific objects or environments) and precisely overlay digital content. Mobile AR experiences have so far relied on a device’s camera in combination with Inertial Measurement Unit (IMU) data (accelerometer, gyroscope) for SLAM, an approach called Visual-Inertial Odometry (VIO). However, the main challenge of visual SLAM is that it lacks precise distance measurement and depth recognition, so virtual content may appear to float in space and look unrealistic, with applications that require accuracy suffering as a result.
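For a concrete reference point, VIO is what a standard ARKit world-tracking session provides out of the box. The minimal sketch below reads the VIO-estimated device pose from each frame; the delegate class and its wiring are illustrative, while the ARKit calls themselves are real.

```swift
import ARKit

// A plain world-tracking session: ARKit fuses camera frames with
// IMU data (VIO) to estimate the device pose; no depth sensor is needed.
final class TrackingDelegate: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // 4x4 pose of the device in world space, estimated by VIO.
        let pose = frame.camera.transform
        let position = pose.columns.3
        print("Device position (m):", position.x, position.y, position.z)
    }
}

let delegate = TrackingDelegate()
let session = ARSession()
session.delegate = delegate
session.run(ARWorldTrackingConfiguration())
```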
Another sensor used for SLAM, common in autonomous cars and robots that require high accuracy, is the Light Detection and Ranging (LiDAR) sensor, which pairs a laser scanner with an IMU. LiDAR systems work by emitting laser pulses and measuring the time it takes for the light to reflect off a surface and return to the sensor. This approach is significantly faster and more accurate than other sensor types and mapping methods, but it is also more expensive and difficult to implement, especially in consumer-focused devices.
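To make the time-of-flight arithmetic concrete, the short sketch below converts a measured round-trip time into a distance using d = c * t / 2; the function name and sample values are hypothetical, not any vendor’s API.

```swift
import Foundation

// Speed of light in meters per second.
let speedOfLight = 299_792_458.0

// The pulse travels to the surface and back, so halve the round trip.
func distance(fromRoundTripTime t: TimeInterval) -> Double {
    return speedOfLight * t / 2.0
}

// A surface 5 m away reflects the pulse in roughly 33 nanoseconds.
let roundTrip = 2.0 * 5.0 / speedOfLight       // ~3.34e-8 s
print(distance(fromRoundTripTime: roundTrip))  // 5.0
```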
Apple introduced a LiDAR scanner for the first time in the latest iPad Pro, and it is anticipated that the next generation of iPhone will include one as well. There is no doubt that LiDAR will unlock opportunities for new AR use cases and, at the same time, bring optimized AR experiences to the masses before AR/Mixed Reality (MR) smart glasses become established in the consumer space. Apart from Apple, other key players in the smartphone industry, such as Samsung, Huawei, and Oppo, have already introduced Time-of-Flight (ToF) cameras to improve both traditional camera usage and AR use cases. The two technologies differ in several ways: LiDAR measures distance up to 5 meters, while ToF is limited to around 2 meters; LiDAR scans continuously, capturing the space point by point, while ToF cameras use a single pulse of infrared light to capture the entire space at once. Consequently, LiDAR sensors can be faster and scan at higher resolution than ToF, with the tradeoff being generally higher cost and more difficult implementation. ToF cameras can also struggle with bright environments and open spaces, which LiDAR handles better.
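On Apple devices, the LiDAR scanner is not exposed as raw pulses but through ARKit’s scene reconstruction. Below is a minimal sketch of opting into LiDAR-backed meshing where available; the capability check and configuration properties are real ARKit 3.5 APIs, while the bare session setup is illustrative.

```swift
import ARKit

// Opt into LiDAR-backed scene reconstruction only where the scanner
// exists (e.g., the 2020 iPad Pro); other devices fall back to plain
// camera + IMU (VIO) tracking with the same configuration.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]

if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    // ARKit fuses LiDAR range measurements into a triangle mesh
    // of the surrounding space, enabling occlusion and physics.
    configuration.sceneReconstruction = .mesh
}

let session = ARSession()
session.run(configuration)
```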
AR/MR Use Cases Improved with LiDAR
IMPACT
The introduction of LiDAR will be a key driver for the maturity and establishment of AR use cases that require precise spatial understanding, beyond AR remote assistance, which is the current killer app for smartphones and tablets in the enterprise. For instance, thanks to the higher accuracy, enhanced environmental and depth understanding, and instant placement of virtual objects, businesses can utilize tablets/smartphones for product/3D model design and visualization use cases. Also, more capable mobile devices will better support digital twin data visualization, which is especially popular in industrial applications, overlaying digital elements more precisely on surfaces and machines. Moreover, AR product visualization apps will be more accurate and reliable, eventually accelerating product purchases through AR apps.
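As a sketch of what “instant placement” looks like in practice, ARKit can raycast from a screen point against detected real-world geometry and anchor content at the hit. The function, tap location, and anchor name below are hypothetical; the raycast calls are real ARKit APIs.

```swift
import ARKit

// Raycast from a screen point (e.g., a user tap) against detected
// real-world geometry and pin content at the hit. On LiDAR devices
// the hit lands on accurate surfaces almost immediately, without a
// lengthy plane-scanning phase.
func placeAnchor(at screenPoint: CGPoint, in session: ARSession, frame: ARFrame) {
    let query = frame.raycastQuery(from: screenPoint,
                                   allowing: .estimatedPlane,
                                   alignment: .any)
    if let hit = session.raycast(query).first {
        let anchor = ARAnchor(name: "placedModel", transform: hit.worldTransform)
        session.add(anchor: anchor)
    }
}
```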
In addition, LiDAR is expected to boost AR indoor navigation apps, such as customer navigation in shopping malls and stores, or worker navigation in warehouses and on factory floors, where an external location reference is unavailable. At the same time, in combination with Artificial Intelligence (AI) algorithms, LiDAR will improve the accuracy of mapping data and of some traditional GPS-based apps. Building 3D maps/point clouds of an environment will help AR cloud-based applications grow and mature, leveraging this higher-resolution point cloud data for more efficient and higher-quality mapping.
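One existing building block for such persistent maps is ARKit’s ARWorldMap, which serializes a session’s accumulated spatial data for later relocalization or sharing. Below is a minimal save sketch, assuming a running session; the file handling is illustrative.

```swift
import ARKit

// Capture the session's accumulated spatial map (feature points and
// anchors) so the same space can be relocalized later or shared.
func saveWorldMap(from session: ARSession, to url: URL) {
    session.getCurrentWorldMap { worldMap, error in
        guard let map = worldMap else {
            print("Map not ready:", error?.localizedDescription ?? "unknown error")
            return
        }
        // Archive and persist; a cloud backend could store the data
        // instead to enable shared, persistent AR experiences.
        if let data = try? NSKeyedArchiver.archivedData(withRootObject: map,
                                                        requiringSecureCoding: true) {
            try? data.write(to: url)
        }
    }
}
```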
All in all, AR/MR applications will benefit from LiDAR in a few key ways: precise spatial understanding; fewer visual oddities such as floating objects, resulting in more realistic and immersive content; better occlusion of digital objects; and improved overall User Experience (UX), while at the same time moving from single-use-case to multi-use-case implementations. Software improvements will also play an essential role in allowing LiDAR to reach its full capabilities and encouraging developers to deploy new AR applications, just as traditional cameras and VIO have been consistently improved through software and AI/Machine Learning (ML).
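For example, on LiDAR-equipped devices RealityKit can use the reconstructed mesh to occlude virtual objects behind real geometry and let them interact physically with it. A minimal sketch, assuming an ARView created elsewhere in the app:

```swift
import ARKit
import RealityKit

// With a LiDAR-built scene mesh, RealityKit can hide virtual objects
// behind real-world geometry (occlusion) and let them collide with it
// (physics), both of which make content feel anchored, not floating.
func enableSceneUnderstanding(on arView: ARView) {
    let configuration = ARWorldTrackingConfiguration()
    if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
        configuration.sceneReconstruction = .mesh
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        arView.environment.sceneUnderstanding.options.insert(.physics)
    }
    arView.session.run(configuration)
}
```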
LiDAR Scanner Increases the Competition between Mobile AR and HMDs
RECOMMENDATIONS
According to ABI Research’s latest Augmented and Mixed Reality Market Data: Devices, Use Cases, Verticals, and Value Chain (MD-ARMR-103), 79% of AR/MR applications currently run on smartphones/tablets, with the remaining usage on Head-Mounted Devices (HMDs). ABI Research anticipates that, despite the technological advancements of AR/MR smart glasses, mobile devices will continue to lead in AR usage over the next three to four years. Smartphones and tablets are low-cost devices in comparison with HMDs and can reach Return on Investment (ROI) more quickly, which is one of the primary concerns when businesses decide to invest in AR/VR. At the same time, smartphones/tablets can be integrated more easily with existing device management systems or other enterprise Information Technology (IT) systems.
The introduction of LiDAR in the latest iPad Pro, and potentially in the upcoming iPhone, will strengthen the capabilities of smartphones/tablets relative to AR/MR headsets and will increase the competition between them, especially if next-generation smartphones and tablets are 5G ready. Hands-free control will remain the primary advantage of HMDs over smartphones, and HMD manufacturers can target improving the User Interface (UI) and enriching input methods (eye tracking, gesture control) in order to remain competitive and highlight user flexibility and natural freedom of movement. Moreover, HMDs with a wide Field of View (FOV) of more than 50 degrees will have an advantage over technically capable tablets/smartphones in use cases that require high immersion (gaming and entertainment, product design, data visualization). At the same time, transparent displays will remain a unique feature of HMDs that cannot be replicated by tablets/smartphones.
Currently, given the high price of high-end devices (such as the HoloLens 2), the inefficiencies of lower-priced devices, and the limited usage of more complex AR use cases (such as product design), LiDAR will give smartphones/tablets a competitive advantage and close the gap between HMDs and mobile devices when it comes to tracking and overall functionality.
All in all, the decision between investing in an AR/MR HMD or a capable smartphone/tablet depends strongly on the nature and needs of the targeted use case and the user’s or business’s budget. Given the technological advancements, both types of devices are capable of supporting many high-value AR/MR use cases, with hands-free operation increasingly the primary differentiator between the device types. This makes for a more compelling market in which to invest, as hardware presents less of a decision-making challenge and more of an opportunity.