Brownfield AI Deployment Requires a Different Approach
NEWS
At the moment, there is much discussion in the Artificial Intelligence (AI) industry about the merits of dedicated AI hardware for inference workloads at the edge, namely on devices, gateways, and on-premises servers. A previous ABI Insight, System-on-Chip versus Discrete AI Chipset: Bringing Artificial Intelligence Beyond Mobile Devices (IN-5694), thoroughly explored the benefits of dedicated AI hardware: it minimizes resource overhead and operational complexity while bringing cost efficiency. This is particularly important for AI devices moving from the Research and Development (R&D) phase to large-scale deployment. Most AI workloads today are narrow AI that focus on a single task and do not require a juggernaut-class chipset offering a wide range of capabilities.
However, the aforementioned benefits only apply to greenfield deployments. Brownfield AI deployment is a very different scenario. Brownfield environments are full of legacy devices that often feature decent computational capabilities, such as embedded Microcontrollers (MCUs), Field-Programmable Gate Arrays (FPGAs), and, in some cases, even server-grade Central Processing Units (CPUs), and that have long replacement cycles. This means industrial AI models need to work on legacy equipment, rather than on the latest and most powerful AI chipsets.
Startups to Take the Lead
IMPACT
As such, a new paradigm is required to facilitate AI model deployment in brownfield settings. Instead of developing a hardware-specific AI model that runs efficiently on a particular architecture, developers need tools that allow them to develop hardware-agnostic AI models that can be supported by legacy infrastructure. AI models developed using these Software Development Kits (SDKs) must fulfill the following requirements:
- Similar Performance on Distributed Edge Computing Architecture: The AI model needs to be able to run on distributed devices, servers, or gateways away from data centers or cloud servers.
- Fully Hardware Agnostic: The AI models must be capable of delivering similar performance on all major computing platforms, including Intel, AMD, or Arm architectures.
- Seamless and Secure Integration: The AI model must integrate easily through simple Representational State Transfer (REST) Application Programming Interfaces (APIs) and support enterprise authentication through standards-based methods such as Security Assertion Markup Language (SAML).
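The REST integration requirement above can be sketched as follows. This is a minimal illustration only: the endpoint URL, payload fields, model name, and bearer token are hypothetical placeholders, not any specific vendor's API; in a real deployment the token would typically be issued after a SAML single sign-on exchange.

```python
import json
from urllib import request

# Hypothetical endpoint and token -- placeholders for illustration,
# not a real vendor API.
ENDPOINT = "https://edge-gateway.local/api/v1/predict"
TOKEN = "example-token-issued-after-saml-sso"

def build_inference_request(sensor_readings):
    """Package time-series sensor readings as a JSON inference request."""
    payload = {"series": sensor_readings, "model": "anomaly-detector-v1"}
    return request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {TOKEN}",
        },
        method="POST",
    )

req = build_inference_request([0.12, 0.15, 0.11, 0.97])
print(req.get_full_url(), req.get_method())
```

Because the interface is plain HTTP and JSON, the same client code works whether the model runs on an Intel server, an Arm gateway, or in the cloud, which is the point of the hardware-agnostic requirement.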
At the moment, several startups have been actively working to offer such solutions. Falkonry, a US$11 million six-year-old startup, for example, offers Falkonry Edge Analyzer, a portable containerized software product that can create and deploy predictive analytics models on all major computing architectures. Developers only need to train and generate the model with time-series data on the Falkonry LRS platform, which itself can run both on-premises and in the cloud. Falkonry LRS is also natively compatible with Siemens XHQ software, which facilitates backward compatibility with existing hardware investments.
Similarly, Neurala, a US$15 million startup, offers Brain Builder, an all-in-one platform for data tagging, training, deployment, and analysis. Based on the company’s proprietary Lifelong-DNN model, Neurala’s Brain Builder can develop edge AI faster with less data and less time spent on training. Once training is complete, the AI model can be tested and prototyped in an iOS or Android app, or deployed directly in the cloud or on edge devices.
FAIM, on the other hand, is a startup that helps manufacturers build AI solutions based on its proprietary Fractal Artificial Intelligence Model (FAIM) hosted in the cloud. Manufacturers can upload training data via the provided APIs and services and receive prediction results through a publish/subscribe messaging architecture. FAIM claims its model computes leanly and efficiently and can analyze large spacetime datasets without the need for dedicated AI hardware.
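The publish/subscribe delivery pattern described above can be illustrated with a minimal in-memory message bus. This is a generic sketch of the pattern, not FAIM's actual interface; the topic names and payload fields are hypothetical, and a production system would use a real broker (e.g., an MQTT or AMQP service) rather than in-process callbacks.

```python
from collections import defaultdict

class PubSubBus:
    """Minimal in-memory publish/subscribe bus for illustration."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to receive every message on this topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to all subscribers of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

bus = PubSubBus()
results = []
# A factory client subscribes to prediction results for one machine
# (topic name is a hypothetical example).
bus.subscribe("predictions/machine-42", results.append)
# The cloud-hosted model publishes a result when inference completes.
bus.publish("predictions/machine-42", {"anomaly_score": 0.07})
print(results)
```

The design choice here is decoupling: the manufacturer's equipment never polls the cloud model directly, so legacy devices only need a lightweight messaging client rather than dedicated AI hardware.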
Large Market Potential
RECOMMENDATIONS
We at ABI Research forecast that the total installed base of AI-enabled devices in industrial manufacturing will reach 15.4 million in 2024, a Compound Annual Growth Rate (CAGR) of 64.8% from 2019 to 2024. However, it is important to highlight that the implementation of AI in industrial manufacturing has not been as seamless as the industry expected, despite the wealth of solutions and data in the manufacturing environment. Much of this is due to the early success of compute-heavy Machine Learning (ML) models that require dedicated hardware support. As the industry slowly identifies lean and hardware-agnostic ways to deploy AI, more AI applications will be developed and deployed on legacy industrial equipment. For greenfield deployments, however, the preference between hardware-focused and software-focused AI will be more balanced and strictly dependent on use cases.