Embedded World Announcements, Conversations, and Demos Highlight a Burgeoning Embedded ML Software Ecosystem
|
NEWS
|
For embedded Machine Learning (ML) chip vendors, the central problem is how to accelerate chipset deployments at scale. It was therefore unsurprising that, although Embedded World featured hardware innovation from many players, especially around Microprocessor Units (MPUs) and Neural Processing Units (NPUs), each announcement, conversation, and demo either included or focused on new software capabilities, as vendors look to lower developers' barriers to entry:
- STMicroelectronics demonstrated both NanoEdge AI Studio and STM32Cube.AI, while showcasing numerous partnerships with Amazon Web Services (AWS) SageMaker, Nota AI, and other Independent Software Vendors (ISVs).
- NXP highlighted a partnership with NVIDIA to integrate the TAO Toolkit into its eIQ platform, giving developers access to NVIDIA's tools and pre-trained model library (the NGC Catalog).
- Synaptics announced Astra (alongside the new SL Series of embedded ML Internet of Things (IoT) processors), a compute platform that supports AI deployment on IoT edge devices.
- Renesas showcased its EdgeCortix partnership, including MERA, a heterogeneous compiler framework; the Dynamic Neural Accelerator; and SAKURA, an AI co-processor device.
- Mouser Electronics announced a partnership with Edge Impulse to enable developers to deploy ML models optimized for compatible microcontrollers.
- Infineon showcased an ML use case that combines a Microcontroller Unit (MCU) and a microphone with tooling from Imagimob (a recently acquired edge ML toolchain company).
Embedded Chip Vendors Will Continue to Build Out Their Software Stack
|
IMPACT
|
There are numerous reasons why software dominated the ML narrative. Hardware innovation continues to slow, and the market expects fewer revolutionary announcements. RISC-V has lowered barriers to entry for embedded ML, increasing market saturation and fragmentation, and forcing incumbents to seek new ways to build differentiation in a mostly homogenous product space. And finally, developers are becoming more cost-sensitive, requiring cheaper access to resources before starting ML experimentation.
Moving forward, software R&D will only increase. Some of the areas ABI Research expects vendors to focus on are:
- Extending Developer Clouds to Provide Real-World Resources to Test and Benchmark ML Models: Increasingly, developer clouds are becoming table stakes for embedded vendors as they look to lower developer barriers and accelerate experimentation. Hardware vendors should expand developer clouds to include realistic user scenario testing environments, as this enables developers to test models across peak, typical, and worst-case usage patterns to assess models against Key Performance Indicators (KPIs).
- Automated Machine Learning (AutoML): ML model development environments remain largely manual; e.g., data annotation, model benchmarking, and model building still rely on hands-on, supervised training runs. Supporting embedded ML at scale requires automation across the entire ML development process.
- Curated Datasets: Embedded ML often targets vision and audio use cases in underserved verticals like industrials and logistics. One of the primary challenges in these verticals is data availability to customize pre-trained ML models. For example, a manufacturing company that is trying to develop vision-based predictive maintenance for screw degradation will not have sufficient “real” data to customize pre-trained models for this use case. Of course, synthetic data can play a role, but increasingly, chip vendors must provide developers with access to pre-curated datasets to accelerate Proofs of Concept (PoCs) and eliminate time-intensive operations like data collection and curation.
- Transitioning from Model Zoos to Ready Models: Model zoos, either developed in-house or through third-party partnerships, are increasingly table stakes for chip vendors. However, taking pre-trained models from zoos to production is still challenging, as it requires data operations and model training, which are time and resource intensive. Imagimob has developed ready models that are “production ready” for specific ML applications like siren or gesture detection. By removing these time-consuming operations, ready models can cut months, if not years, from ML PoCs, accelerating time-to-scale.
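To make the AutoML point above concrete, the toy sketch below (illustrative only; not any vendor's actual tool, and all names are hypothetical) automates model selection over a small hyperparameter grid, the kind of loop that replaces manual, supervised trial-and-error training runs:

```python
# Toy AutoML loop: exhaustively search a hyperparameter grid and pick
# the configuration with the lowest validation error, instead of a
# developer hand-tuning each training run.

def fit_ridge(xs, ys, lam):
    """Closed-form 1-D ridge regression: w = sum(x*y) / (sum(x^2) + lam)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def val_error(w, xs, ys):
    """Mean squared error of the fitted slope on held-out data."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Synthetic "sensor" data: roughly y = 2x with noise.
train_x, train_y = [1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8]
val_x, val_y = [5, 6], [10.0, 12.1]

# The automated part: score every candidate and keep the best.
best_err, best_lam = min(
    (val_error(fit_ridge(train_x, train_y, lam), val_x, val_y), lam)
    for lam in [0.0, 0.1, 1.0, 10.0]
)
```

A production AutoML system extends the same pattern to data annotation, architecture search, and benchmarking across target hardware, but the principle, i.e., replacing manual supervision with an automated search-and-score loop, is the same.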
Chip vendors should innovate to reduce developer friction (cost, resources, time) and accelerate ML deployment at massive scale; this is the key to profitability in a low-margin, high-volume business.
Hyperscaler Partnerships Will Help Expand the ML Ecosystem, but R&D and Acquisitions Will Also Be Important to Differentiate Solutions against the Wave of RISC-V Entrants
|
RECOMMENDATIONS
|
As embedded chip vendors look to build out their ML software propositions, they will encounter sizable human capital and developer capacity challenges. Partnering with hyperscalers can help accelerate software ecosystem buildout, and hyperscalers can also act as channel partners. At Embedded World, STMicroelectronics showcased its integration with AWS SageMaker and AWS IoT Core. This integration makes sense for a variety of commercial and technical reasons:
- Integrating with SageMaker enables developers to continue to use tools that they are familiar with.
- Developers can access a range of ML ecosystem tools and models through existing AWS platform integrations (e.g., GitHub or Hugging Face).
- Vendors can access and integrate adjacent tools like AWS IoT Core or Azure IoT into their products/services.
Partnering with hyperscalers will rapidly grow the embedded ML ecosystem and will certainly help established incumbents like NXP or Infineon (and new entrants) build End-to-End (E2E) ML solutions. However, RISC-V has massively lowered barriers to entry in the embedded space, and new entrants are increasingly deploying competitive solutions. Incumbents must look to build further product differentiation through proprietary Research and Development (R&D). Given the talent challenges, R&D is best approached through startup acquisition.
Established incumbents should look to acquire company Intellectual Property (IP) and talent, and then provide funding to support innovation. In the majority of cases, acquired startups should not be fully integrated, but should operate separately with appropriate funding and strategic cooperation, because one of the key risks with integration is that R&D and decision-making are slowed by corporate processes. Eventually, effective startup acquisitions can be integrated into the company and form the basis of new products or services. Intel is an example of an established incumbent that has leveraged Mergers and Acquisitions (M&A) effectively to support product development—the acquisition of Habana Labs to develop Gaudi accelerators, and the acquisitions of cnvrg.io and Granulate to form the basis of the Tiber Enterprise Platform. But even Intel has not always been successful, as corporate decision-making and conflicts of interest can often hinder success for acquired business units. Altera’s acquisition, integration, and spin-out (9 years later) as an Intel-owned separate entity is a prime example. Established incumbents like NXP must ensure effective management of acquired companies to minimize R&D and commercial disruption, while maximizing synergies and value creation.
One of the key areas that ABI Research recommends embedded ML chip vendors focus on is optimization. Even mature players like STMicroelectronics still rely on third-party, open-source tools like TensorFlow or limited technology partnerships like Nota AI. Going forward, this will not be sufficient, and vendors must look to bring proprietary optimization tools in-house to continue to develop a strong, differentiated value proposition. One note of caution: as the AI market expands quickly, optimization startup price tags will rise aggressively, so ABI Research recommends that chip vendors move quickly to acquire companies (IP and talent) and provide them with the resources to invest and innovate internally.
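To ground what “optimization” means in this context: embedded ML toolchains commonly apply post-training quantization to shrink models for MCU and NPU targets. The sketch below is a minimal, illustrative example of symmetric int8 weight quantization; it is not any vendor's actual implementation, and real toolchains layer on per-channel scales, calibration, pruning, and operator fusion:

```python
# Illustrative symmetric int8 post-training quantization: map float
# weights onto integers in [-127, 127] with a single scale factor,
# cutting storage roughly 4x versus float32 -- the kind of step that
# embedded optimization toolchains automate for constrained targets.

def quantize_int8(weights):
    """Return (quantized integer weights, scale factor)."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the integer encoding."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Quantization error is bounded by half a quantization step (scale / 2).
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Accuracy loss from this rounding is exactly the trade-off that proprietary optimization tooling is meant to manage automatically, which is why ABI Research views in-house optimization capability as a differentiator rather than a commodity.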