AI Developers Facing Challenges
NEWS
Artificial intelligence (AI) chipsets are becoming increasingly complicated to design, develop, and deploy. As enterprises push for more accurate computer vision, highly personalized recommendation systems, and more natural interactions with conversational AI, developers are experimenting with a myriad of new deep learning techniques. These techniques, such as large language models, reinforcement learning, multimodal learning, and graph neural networks, require highly specialized hardware and software expertise. Often, developers need custom hardware and software solutions to meet performance expectations.
Singularly focused on pushing the boundaries of innovation, developers prefer not to spend time resolving compatibility issues, integrating or optimizing their code for specific hardware, or testing every new AI technique that emerges. To address this pain point, semiconductor companies such as NVIDIA, Intel, and Qualcomm have begun focusing their efforts on providing software development tools for AI.
Efforts from Major Cloud AI Chipset Vendors
IMPACT
NVIDIA, traditionally known first and foremost as a leader in gaming, is radically shifting its business model. While gaming represents 46% of NVIDIA’s revenue today, the company believes its future revenue will be driven by various emerging technologies, with AI the most prominent of all. Over the years, the company has steadily built up its CUDA-X Software Development Kit (SDK) for AI. CUDA-X is a collection of AI libraries, tools, and frameworks that enables developers to create AI applications optimized for its Graphics Processing Units (GPUs). This investment in AI software allows NVIDIA to orient its business toward AI, leveraging the strengths of its GPU technology to address the computational demands of AI models. In recent years, NVIDIA has allocated much of its time and resources to developing industry-specific software to complement the hardware, such as DRIVE for autonomous vehicles and Isaac for robotics.
More recently, the company has also seen potential in offering targeted cloud-based AI solutions. The NeMo Large Language Model (LLM) cloud service, announced at GTC Autumn 2022, enables developers to design, train, and deploy large language models for domain-specific functions, allowing AI developers to easily create new conversational AI models for automatic speech recognition, natural language processing, and text-to-speech synthesis. Taking things further, NVIDIA is delivering tailored solutions like BioNeMo under Clara for healthcare. BioNeMo is a large, pre-trained language model that speeds up drug discovery and protein sequencing. By providing software solutions geared toward specific use cases and industries, NVIDIA helps its customers reduce time to market.
As a leader in Central Processing Units (CPUs) and Field Programmable Gate Arrays (FPGAs), Intel offers a wide range of computing options for AI developers across the distributed computing landscape. Not surprisingly, Intel has invested significantly in software companies that help data scientists integrate, optimize, and execute their models across various AI hardware. This not only streamlines pipeline workloads, but also reduces power consumption, bandwidth needs, and associated operating costs. Optimizing AI for dedicated hardware further lowers the cost and power consumption of running AI applications, and optimized, well-integrated code can save developers a significant amount of cost during commercialization.
Today, Intel is widely considered the most industrious chipset vendor in the open-source community. Platforms such as OpenVINO, oneAPI, and Geti eliminate the complexity of working with heterogeneous hardware. In turn, developers can focus on innovating their applications and accelerate time to market.
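The appeal of such abstraction layers can be illustrated with a minimal sketch. The plain-Python pattern below mimics the "compile once, target an AUTO device" idea that runtimes like OpenVINO expose, where the runtime selects the best available backend so application code never hard-codes a device; every class and function name here is hypothetical for illustration, not an actual Intel API.

```python
# Hypothetical sketch of an "AUTO" device-selection layer, loosely modeled
# on the pattern hardware-agnostic AI runtimes expose. Illustrative only.

class Backend:
    """One hardware target with a simple capability score."""
    def __init__(self, name, score):
        self.name = name
        self.score = score

    def run(self, model, inputs):
        # A real backend would dispatch to device-optimized kernels here.
        return {"device": self.name, "output": model(inputs)}

class Runtime:
    """Registers available backends and picks one automatically."""
    def __init__(self):
        self.backends = []

    def register(self, backend):
        self.backends.append(backend)

    def compile_model(self, model, device="AUTO"):
        if device == "AUTO":
            # Pick the highest-scoring backend that is present.
            chosen = max(self.backends, key=lambda b: b.score)
        else:
            chosen = next(b for b in self.backends if b.name == device)
        # Return a callable bound to the chosen device.
        return lambda inputs: chosen.run(model, inputs)

# Usage: the application code never mentions a specific device.
runtime = Runtime()
runtime.register(Backend("CPU", score=1))
runtime.register(Backend("GPU", score=3))

model = lambda x: [v * 2 for v in x]  # stand-in for a real network
compiled = runtime.compile_model(model, device="AUTO")
result = compiled([1, 2, 3])
print(result["device"], result["output"])  # GPU [2, 4, 6]
```

The design point the sketch captures is that the hardware decision lives in the runtime, not the application, which is what lets the same developer code move across heterogeneous silicon.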
Back in June 2022, Qualcomm announced it was strengthening its existing AI Software Development Kit (SDK) with the Qualcomm AI Stack, which serves a wide range of Qualcomm products, such as Arm-based Snapdragon chips. Qualcomm also partnered with Google Cloud Platform to integrate Vertex AI Neural Architecture Search (NAS) capability into its AI Engine Direct.
Meanwhile, AMD has upgraded its ROCm open software platform for the Machine Learning (ML) community and has doubled the number of platforms supporting its Instinct offering. Through the acquisition of Xilinx, a market leader in FPGAs, AMD has introduced two key tools: PYNQ for designing programmable logic circuits and Vitis AI for AI inference on FPGAs. Both tools are specifically designed for Python programmers without knowledge of specialized FPGA design tools.
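One of the chores that FPGA inference toolchains of this kind automate is quantizing floating-point weights to low-precision integers before deployment. The sketch below shows, in plain Python, a simplified symmetric int8 quantization of the sort such tools perform behind the scenes; the helper names are hypothetical and not part of AMD's toolchain.

```python
# Simplified symmetric int8 weight quantization, of the kind FPGA
# inference toolchains perform automatically. Hypothetical helpers only.

def quantize_int8(weights):
    """Map float weights into int8 range [-127, 127] with one scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights, e.g. for accuracy checks."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.02, 1.0]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# Rounding keeps the error within half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print(q, max_err <= scale / 2 + 1e-9)  # [50, -127, 2, 100] True
```

Hiding this step, along with compilation to the programmable logic itself, is what lets Python programmers deploy models to FPGAs without touching hardware design tools.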
High Performance Hardware Is No Longer Sufficient
RECOMMENDATIONS
Recent actions from NVIDIA, Intel, AMD, Qualcomm, and other semiconductor players show that these AI software offerings are interoperable, hardware-agnostic, and easy to use, allowing developers and data scientists to devote more time and resources to designing, testing, and deploying their AI models and applications. To further improve the experience for AI developers, these companies are also building strong alliances with independent device Original Equipment Manufacturers (OEMs), Independent Software Vendors (ISVs), and System Integrators (SIs).
All the advancements above indicate that AI chipset vendors realize how important simplifying AI development processes is to remaining competitive in the AI market. High-performance AI hardware alone is no longer sufficient to democratize AI development and adoption. Highly optimized, easy-to-use AI software is now key to the overall user experience and has become pivotal in AI chipset vendors’ efforts to generate ecosystem stickiness with their user base.