U.K.-based semiconductor Intellectual Property (IP) house Imagination Technologies recently made headlines by securing a US$100 million investment to bolster the development of its edge Artificial Intelligence (AI) portfolio. The company has an illustrious legacy, once claiming the top position in Graphics Processing Unit (GPU) innovation, ahead of NVIDIA. In the early days of mobile platforms, Imagination’s footprint was immense: PowerVR IP featured in a range of Apple products (including the first iPhone in 2007), MediaTek Systems-on-Chip (SoCs), gaming consoles, automotive systems, and more. By 2008, PowerVR graphics had shipped in more than 100 million consumer devices, reaching 1 billion by 2013.
This success came to a grinding halt in 2017 when Apple—responsible for around half of Imagination’s revenue—terminated its licensing arrangement, initiating a period of turmoil. The company was snapped up by Chinese private equity shortly after, giving the story a geopolitical angle (the relationship with Apple was restored in early 2020, although likely at a smaller commercial value). The final piece is last November’s company-wide 30% staff cull, blamed on export restrictions to China, which probably factored into the need for the recent cash injection.
Before the financing deal, rumors circulated about a possible return to public markets, a move that would echo fellow British IP house Arm’s partial return to public markets a year earlier. But where Imagination differs from Arm—and NVIDIA—is the missed AI opportunity. When NVIDIA began to address AI with its GPU portfolio, Imagination remained focused on graphics—two markets with now clearly divergent paths. Nonetheless, Imagination will now refocus its AI strategy around its GPU portfolio, a well-trodden path over at NVIDIA.
The recent financial injection comes with a renewed focus on targeting AI workloads via the company’s established GPU IP, marking a pivot away from a dedicated Application-Specific Integrated Circuit (ASIC) for AI. The strategic decision here is to leverage mature GPU software stacks, which are broader in remit, rather than the fragmented, more narrowly optimized ASIC ecosystem. However, Imagination is not new to using its GPU portfolio to target AI workloads—the company talked up the applicability of its PowerVR IP for Convolutional Neural Networks (CNNs) as early as 2017—and investment in new compute libraries to achieve higher GPU utilization for AI workloads is part of today’s roadmap.
This strategy is complemented by Imagination’s leading role in the Unified Acceleration Foundation (UXL) consortium, which seeks to dislodge NVIDIA’s CUDA from its monopolistic grip on the AI software developer community. UXL aims to create an open-standard, cross-platform, multi-architecture accelerator programming model for application development, based on an evolution of Intel’s oneAPI initiative—a stalwart of open ecosystems. Industry buy-in is positive, with members including Arm, Fujitsu, Google, and Qualcomm. By easing software portability from CUDA, UXL hopes to build developers a bridge across NVIDIA’s moat and allow AI applications to run on heterogeneous platforms, including multiple dedicated AI accelerators.
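To make the “cross-platform, multi-architecture” idea concrete, the sketch below uses SYCL, the open C++ programming model at the heart of oneAPI on which UXL builds. It is not Imagination’s or UXL’s own code—just a minimal, illustrative example of how a single kernel can be written once and dispatched to whatever accelerator (GPU, CPU, or other) the runtime discovers, rather than being tied to one vendor’s API such as CUDA.

```cpp
// Minimal SYCL (oneAPI) sketch: a vendor-neutral vector-add kernel.
// The same source can target different accelerators depending on which
// SYCL backend and device the runtime finds at execution time.
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    constexpr size_t N = 1024;
    std::vector<float> a(N, 1.0f), b(N, 2.0f), c(N, 0.0f);

    // The default queue binds to whichever device the runtime selects;
    // no vendor-specific API calls are needed.
    sycl::queue q;
    std::cout << "Running on: "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    {
        // Buffers hand the host data to the runtime for device access.
        sycl::buffer<float> bufA(a.data(), sycl::range<1>(N));
        sycl::buffer<float> bufB(b.data(), sycl::range<1>(N));
        sycl::buffer<float> bufC(c.data(), sycl::range<1>(N));

        q.submit([&](sycl::handler& h) {
            sycl::accessor A(bufA, h, sycl::read_only);
            sycl::accessor B(bufB, h, sycl::read_only);
            sycl::accessor C(bufC, h, sycl::write_only, sycl::no_init);
            // One work-item per element; the runtime maps this onto the
            // target device's execution units.
            h.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
                C[i] = A[i] + B[i];
            });
        });
    } // Buffer destruction copies results back to the host vectors.

    std::cout << "c[0] = " << c[0] << " (expected 3)\n";
    return 0;
}
```

Portability layers like this, combined with tooling to migrate existing CUDA code, are what would allow applications written for one accelerator family to move across the heterogeneous platforms UXL targets.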
Alongside the focus on AI GPUs, Imagination has invested heavily in RISC-V IP, including the recent release of a RISC-V application processor with AI capabilities targeting consumer and industrial devices. Imagination’s RISC-V IP can be found in Alibaba’s AIoT SoCs, for example, and the company has announced an edge AI course in partnership with Spanish and Chinese universities to develop the engineering talent needed to build RISC-V-based SoCs. This partnership is congruent with China’s long-established objective of achieving semiconductor independence from the West, as well as Imagination’s longstanding commercial relationship with the country. RISC-V CPUs will form an integral part of Imagination’s edge AI roadmap going forward, providing open-standard, general-purpose compute alongside the acceleration capabilities of its GPU portfolio in SoCs and other packages.
Imagination’s approach to edge AI is two-pronged: developing open-standards software (e.g., through UXL) and accelerated computing hardware (i.e., GPUs) to address diverse AI workloads into the future, thereby positioning the company as a legitimate competitor to NVIDIA in this space. The company is promoting the notion that edge AI will benefit from the same computing principles applied to large-scale cloud deployments, namely the use of scalable, accelerated computing methods across AI frameworks. This should allow open-standard software performance to scale as the compute density of its GPUs increases.
Zooming in on the hardware side, the core tenet of Imagination’s strategy is its focus on programmability, acceleration, and flexibility. This fundamentally sets it apart from other edge AI players ploughing resources into domain-specific, narrowly optimized ASICs, such as Neural Processing Units (NPUs), for edge AI.
Thus, Imagination ventures into promoting GPU architectures in edge AI use cases and form factors where NPUs have become increasingly popular for addressing AI workloads. This includes mobile, client, automotive, and consumer electronics devices like wearables, where the small size and energy footprint of NPUs have long been touted as essential for deploying on-device AI. On the other hand, by going down the route of more flexible GPUs, Imagination is also less exposed to the very real possibility that a new AI model will emerge that is unsuited to today’s NPUs, which have already had to adjust to serve the transformer models of the generative AI (Gen AI) era.
The strategy to apply the more open and flexible compute successes of the cloud to edge AI, and the focus on RISC-V, may come up against several countervailing forces and issues. But this is not Imagination’s first challenge, and the company is one of a handful of surviving GPU players from the 1990s, which is no mean feat.
Paul Schell, Industry Analyst at ABI Research, is responsible for research focusing on Artificial Intelligence (AI) hardware and chipsets with the AI & Machine Learning Research Service, which sits within the Strategic Technologies team. The burgeoning activity around AI means his research covers both established players and startups developing products optimized for AI workloads.