Edge AI Solution from Microsoft
| NEWS |
At Ignite 2021, Microsoft announced Azure Percept, an end-to-end edge Artificial Intelligence (AI) platform that combines hardware accelerators with integrated Azure AI and Internet of Things (IoT) services. Azure Percept can run deep learning-based machine vision workloads at the edge without a connection to the cloud. Designed as an extension of Microsoft’s cloud services, the solution works out of the box with various Azure services, including Azure Cognitive Services, Azure Machine Learning, and Azure Live Video Analytics, enabling real-time AI processing outside of a public cloud environment. Once devices are deployed, developers can rely on Azure Percept Studio to manage their edge AI models.
To support AI processing at the edge, the Azure Percept Development Kit features an NXP i.MX 8M processor. Its camera-enabled module, Azure Percept Vision, adds a Red, Green, Blue (RGB) camera sensor and an Intel Movidius Myriad X edge AI chipset. To prevent unauthorized access and malicious manipulation of the AI model, the kit also includes a hardware root of trust, which works with a Trusted Platform Module (TPM) and an attestation service to validate devices and provide hardware-based protection.
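The device-validation step described above follows the familiar challenge-response attestation pattern: the service issues a random nonce, the device answers with a signature produced by a key held in hardware, and the service verifies the answer. The Python sketch below illustrates only that general pattern; the key handling, function names, and HMAC-based signature are simplifying assumptions, not Azure Percept's actual TPM-quote protocol.

```python
import hashlib
import hmac
import os

# Stand-in for a secret provisioned in, and never leaving, the TPM.
DEVICE_KEY = os.urandom(32)

def device_attest(nonce: bytes, model_hash: bytes) -> bytes:
    """Device side: bind the service's nonce and the deployed AI model's
    hash to the hardware-held key (a simplified stand-in for a TPM quote)."""
    return hmac.new(DEVICE_KEY, nonce + model_hash, hashlib.sha256).digest()

def verifier_check(nonce: bytes, model_hash: bytes, response: bytes) -> bool:
    """Attestation-service side: recompute the expected value and compare
    in constant time to decide whether the device and model are genuine."""
    expected = hmac.new(DEVICE_KEY, nonce + model_hash, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = os.urandom(16)  # fresh challenge, prevents replay of old responses
genuine_hash = hashlib.sha256(b"deployed-model-weights").digest()
response = device_attest(nonce, genuine_hash)

assert verifier_check(nonce, genuine_hash, response)  # genuine device passes
tampered_hash = hashlib.sha256(b"tampered-model-weights").digest()
assert not verifier_check(nonce, tampered_hash, response)  # tampered model fails
```

Because the response is bound to both a fresh nonce and the model hash, a replayed answer or a swapped model is rejected, which is the property the hardware root of trust is meant to guarantee.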
Rejoice, Robotics Developers
| IMPACT |
Azure Percept is designed to handle a wide range of deep learning-based machine vision and audio use cases, including parcel tracking in warehousing and logistics, quality inspection on production lines, and surveillance and monitoring of urban infrastructure. However, ABI Research believes that the use case that will benefit most from the launch of Azure Percept is robotics, specifically mobile robotics. In the past, Microsoft commanded a market leadership position in computer vision and robotics development, alongside Intel RealSense, thanks to its popular Kinect product line. First released in 2010, Kinect is a line of motion-sensing input devices that was originally intended to replace the game controller for Microsoft's Xbox video game consoles. That attempt failed, but Kinect found unexpected popularity among robotics enthusiasts and academia due to its robustness and cost efficiency.
Armed with a dedicated edge AI chipset, Azure Percept presents itself as a significant upgrade over Kinect. Its seamless integration with Azure services in the cloud ensures that robotics developers can benefit from software and tools designed for edge AI operationalization, including sensor fusion for Simultaneous Localization and Mapping (SLAM), object detection, image classification, speech recognition, and voice control. Running AI at the edge on robots and drones reduces latency and response time, enabling safer, more precise, and more autonomous robotics operations. This will greatly accelerate the proliferation of mobile robots, of which a total of 5 billion units are expected to ship in 2030, according to ABI Research’s Commercial and Industrial Robotics market data (MD-CIROBO-105). Coupled with Microsoft’s support for the Robot Operating System (ROS), this is a strong sign that Microsoft is aiming to expand its influence in the robotics industry.
Better Late than Never
| RECOMMENDATIONS |
In recent years, cloud service providers have been actively expanding into the edge domain, namely on-premises servers, gateways, devices, and sensors. These moves not only attract developers who already rely on the vendors’ services, but also extend those vendors’ solutions into the edge AI market. AI solutions deployed in this environment can benefit from the same scale and flexibility offered by the public cloud.
For example, Amazon Web Services (AWS) is well known for SageMaker, its cloud AI development platform. However, AWS has recently launched several edge-based hardware solutions, including Snowcone and Snowball, as well as AWS Panorama. These hardware offerings support AWS software and services while located outside of AWS cloud data centers. Developers may find them especially useful in conjunction with SageMaker Neo, which compiles SageMaker-trained models to run efficiently on edge hardware.
Another good example is Google. As a leader in AI, Google is known for its TensorFlow AI framework and its Tensor Processing Unit (TPU), designed to work on the Google Cloud Platform. In 2018, Google launched its Coral Edge TPU, a complete platform for accelerating neural networks on embedded devices. Notably, Google has been a strong supporter of embedded AI, as its open-source TensorFlow Lite for Microcontrollers has become the de facto framework for edge AI. Other cloud vendors, such as Baidu and Huawei, have also launched their own edge AI chipsets for end devices.
The launch of Azure Percept is Microsoft’s answer to this ongoing trend. Microsoft is trying to facilitate edge AI development on its Azure services by offering a zero-code development and deployment platform with a cloud training service for machine learning models. Leveraging high-performance edge AI chipsets from Intel and NXP, developers can start with a library of pre-built AI models covering vision capabilities, such as object detection, shelf analytics, and vehicle analytics, as well as audio capabilities like voice control and anomaly detection.
The edge AI market is huge: the installed base of devices with edge AI chipsets is expected to exceed 3 billion by 2025, according to ABI Research’s Artificial Intelligence and Machine Learning market data (MD-AIML-106). To compete with other solution providers in the edge market, ABI Research believes that cloud service providers should continue to refine and strengthen their edge-to-cloud portfolios and encourage their developer communities to explore edge AI deployments. At the end of the day, the technology vendors that can produce a clear product roadmap, support interoperability, and demonstrate domain expertise will be the ones to influence the future of the edge AI market.