Strategic Advantages of Generative AI Deployment at the Edge versus in the Cloud
NEWS
Running generative AI models at the edge is the fastest way to return responses to queries and to extract data for model training. Generative AI deployed at the edge can use real-time data points from factory floor machines to provide insights into predictive maintenance and product quality. In contrast, taking generative AI to the cloud introduces longer latency between data extraction and response, but enables the model to train on far larger datasets with significantly greater computational power, which leads to more comprehensive models. Through the use of the cloud, generative AI can move beyond relatively simple functions, such as predictive maintenance, to more compute-intensive workloads like part nesting and process sequencing.
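As a rough illustration of this split, the Python sketch below keeps a lightweight anomaly check at the edge for immediate predictive maintenance alerts while batching raw readings for upload to a cloud-side training pipeline. The sensor values, thresholds, and upload step are all illustrative assumptions, not any specific vendor's implementation.

```python
# Illustrative only: a lightweight edge loop that flags anomalies locally for
# low-latency alerts and batches raw readings for cloud-side model training.
# Sensor values, thresholds, and the upload step are simulated assumptions.
import random
import statistics
from collections import deque

EDGE_WINDOW = 50          # readings kept in local memory at the edge
CLOUD_BATCH = 200         # readings accumulated before a cloud upload

recent = deque(maxlen=EDGE_WINDOW)
cloud_buffer = []

def read_vibration_mm_s():
    """Stand-in for a real machine sensor read."""
    return random.gauss(2.0, 0.4)

def edge_anomaly_check(window):
    """Simple local check: flag readings far above the recent mean."""
    if len(window) < 10:
        return False
    mean = statistics.mean(window)
    stdev = statistics.pstdev(window)
    return window[-1] > mean + 3 * stdev

def upload_to_cloud(batch):
    """Placeholder for shipping data to a cloud training pipeline."""
    print(f"uploading {len(batch)} readings for cloud-side training")

for _ in range(1000):
    reading = read_vibration_mm_s()
    recent.append(reading)
    cloud_buffer.append(reading)

    if edge_anomaly_check(list(recent)):
        print(f"edge alert: abnormal vibration {reading:.2f} mm/s")

    if len(cloud_buffer) >= CLOUD_BATCH:
        upload_to_cloud(cloud_buffer)
        cloud_buffer = []
```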
Generative AI Deployment Is Better Suited for the Cloud
IMPACT
Current use cases for generative AI in manufacturing that leverage the cloud include generative design modeling, part consolidation to reduce the Manufacturing Bill of Materials (MBOM), and advanced Computational Fluid Dynamics (CFD) simulation. Future use cases of generative AI in the cloud include assembly line creation and optimization. These use cases all require massive quantities of input data, along with computational capacity that is not available at the edge. Running thousands of simulations and candidate reworked designs requires the power of the cloud. While generative AI at the edge specializes in time-sensitive responses using real-time data, it is significantly limited in the size of workload it can handle.
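The sketch below gives a rough sense of why this class of workload favors the cloud: it evaluates thousands of candidate design variants in parallel across available cores, with a toy scoring function standing in for an expensive CFD or structural solver. The parameters and scoring logic are illustrative assumptions only, not a real generative design engine.

```python
# Illustrative only: a brute-force design-space sweep evaluating thousands of
# candidate variants in parallel. The score() function is a toy stand-in for
# an expensive solver (e.g., CFD or FEA) that would justify cloud-scale compute.
import itertools
from concurrent.futures import ProcessPoolExecutor

def score(candidate):
    """Toy objective: trade part weight against a crude stiffness estimate."""
    thickness, ribs, spacing = candidate
    weight = thickness * 10 + ribs * 2
    stiffness = thickness * ribs * spacing
    return stiffness - weight, candidate

if __name__ == "__main__":
    # ~4,000 candidate designs; a real sweep could be orders of magnitude larger.
    candidates = list(itertools.product(
        [t / 10 for t in range(10, 30)],   # thickness (mm)
        range(2, 12),                      # rib count
        range(5, 25),                      # rib spacing (mm)
    ))
    with ProcessPoolExecutor() as pool:
        best_score, best_design = max(pool.map(score, candidates, chunksize=64))
    print(f"best candidate {best_design} with score {best_score:.1f}")
```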
Although generative AI in the cloud is considered the more comprehensive deployment method, this does not eliminate the need for generative AI at the edge. If relatively simple recommendations regarding product quality and machine operability are all that is required, edge deployment provides the fastest results. Generative AI deployed at the edge is also generally cheaper, so manufacturers looking to adopt this deployment method will have to balance cost against the integrity of the results. For Small and Medium Businesses (SMBs), this balance will dictate the go-to-market strategy, and solution providers should capitalize on it by offering both edge and cloud solutions for the differing scales of the manufacturing market. The cost/integrity balance will matter less for larger manufacturers that want to use generative AI to its full potential and overhaul significant aspects of their manufacturing operations, because the edge simply lacks the computational power for those workloads. Stratasys’ partnership with generative design company nTopology is an example of a solution provider partnering with a specialized cloud-based company to leverage generative AI in the cloud, giving large-scale manufacturers access to designs that require extensive compute power to explore.
Custom Generative AI in the Cloud Provides the Most Comprehensive Solution for Manufacturers
RECOMMENDATIONS
Manufacturers see the inherent risks with generative AI, such as incomplete, incorrect, or irrelevant datasets training the models, which can lead to biased outcomes. Snowflake, a cloud service provider, has partnered with NVIDIA and its generative AI framework NeMo to enable manufacturers to develop and train AI models customized on their own company data. Deployed on Snowflake’s Manufacturing Data Cloud, NeMo lets manufacturers build on its architectural framework so that custom models can be deployed with specific functionality. Custom-trained generative AI mitigates risks such as incomplete and irrelevant datasets by operating only on the company data it is given. Additionally, with enhancements to NeMo’s Graphics Processing Unit (GPU) performance, along with the use of Three-Dimensional (3D) parallelism for neural network training, latency from operating in the cloud is gradually decreasing.
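NeMo’s own APIs are not reproduced here; as a generic sketch of the underlying pattern, assuming the Hugging Face transformers and datasets libraries and a small illustrative base model, adapting a pretrained model to curated company text might look roughly like the following. The model name, sample documents, and hyperparameters are assumptions, and training of this kind would in practice run on cloud GPUs rather than at the edge.

```python
# A minimal, generic sketch of customizing an open model on internal company
# documents. This is NOT NVIDIA NeMo's API; it illustrates the same pattern:
# start from a pretrained foundation model and adapt it on curated,
# company-specific text so outputs stay grounded in that data.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE_MODEL = "EleutherAI/gpt-neo-125m"          # small model for illustration
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Curated internal text: maintenance logs, work instructions, quality reports.
company_docs = [
    "Station 4 torque spec for housing bolts is 12 Nm; recheck after 500 cycles.",
    "CNC spindle vibration above 4 mm/s triggers a tooling inspection.",
]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = Dataset.from_dict({"text": company_docs}).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-mfg-model",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # in practice this step runs on cloud GPUs, not at the edge
```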
Bard, ChatGPT, and IBM’s Watsonx proved the power of Large Language Models (LLMs) and their potential use cases as assistants and content creators; however, manufacturers need more customizable generative AI solutions that work at every scale. With manufacturers requiring precise generative AI models to serve industry-specific use cases, the additional computational power of the cloud fills this market need. Manufacturers will start creating personalized applications that run on generative AI for new insights into product design, operations, and production.
Surface-level use cases that use generative AI to answer human language inputs have been explored at the edge; however, use cases with larger scopes have not yet been implemented. Cloud-based generative AI that is built out and trained on company data must be a priority for manufacturing platform providers. Using the scale of the cloud, purpose-built generative AI applications will redefine how manufacturers interact with data and maximize operational efficiency. ABI Research has compiled a list of generative AI use cases for manufacturing that can be developed from custom cloud-based generative AI (see ABI Research’s Generative AI Use Cases in Manufacturing (PT-2763)). Below are some of the use cases, one of which is sketched after the list:
- Generating a new design with fewer parts (consolidation)
- Generative scenario analyses (simulation) to improve product performance
- Tool path optimization
- Drafting step-by-step manufacturing work instructions for new products and production processes
- Demand and supply forecasting for warehouse inventory stock and purchasing period management
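As an example of the work instruction drafting use case above, a minimal sketch of how a manufacturer might call a custom, cloud-hosted generative AI service to produce a draft follows; the endpoint URL, model name, payload fields, and part number are hypothetical assumptions for illustration and do not correspond to any specific vendor's API.

```python
# Illustrative only: calling a company-hosted, custom-trained generative AI
# service to draft step-by-step work instructions for a new assembly. The
# endpoint URL, payload schema, model name, and part number are hypothetical
# assumptions; they do not correspond to any specific vendor's API.
import requests

ENDPOINT = "https://genai.internal.example.com/v1/generate"  # hypothetical

payload = {
    "model": "mfg-instructions-custom",     # hypothetical custom model
    "prompt": (
        "Draft step-by-step work instructions for assembling housing model "
        "HX-220, including required tools, torque specs, and quality checks."
    ),
    "max_tokens": 600,
}

response = requests.post(ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json().get("text", ""))
```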