<img height="1" width="1" style="display:none;" alt="" src="https://px.ads.linkedin.com/collect/?pid=1448210&amp;fmt=gif">
COMPUTEX 2024: AI Personal Computers (PCs) On Full Display

COMPUTEX 2024: AI Personal Computers (PCs) On Full Display

October 1, 2024

With more than 80,000 visitors, this year’s COMPUTEX was, by far, the most attended in the history of the show. Traditionally, COMPUTEX is where the computer and information technology supply chain, notably Asian component and module suppliers, Original Device Manufacturers (ODMs), and Original Equipment Manufacturers (OEMs), exhibit their latest innovations to attract customers. While the show has historically focused on Personal Computers (PCs), laptops, the Internet of Things (IoT), and other consumer electronics, Artificial Intelligence (AI) dominated this year, targeting the entire stack from cloud applications to the far edge of the infrastructure, and from consumer applications to the enterprise.

Yet again, Jensen Huang, Chief Executive Officer (CEO) of NVIDIA, was the superstar of the show, appearing during several keynotes and reminding attendees that without NVIDIA Graphics Processing Unit (GPU)-based accelerated computing, the Compute Unified Device Architecture (CUDA) development environment, and NVIDIA NVLink, it would have been impossible for generative AI models, including OpenAI’s ChatGPT and Meta's Llama, to enjoy the popularity they are experiencing today. He predicts a tectonic shift in the way information and intelligence will be acquired and processed, moving from retrieval-based systems toward content generated by AI.

If this development materializes, it will further boost demand for accelerated computing, leaving traditional Central Processing Units (CPUs) to deal with general-purpose tasks. On a personal level, Jensen Huang told some of NVIDIA’s partners that when they come to visit him, he usually cooks for them, whereas when he visits them, they invite him to good restaurants with a nice à la carte menu. This anecdote says a lot about the personality of the man and his drive to create and innovate, rather than comfortably enjoying readily prepared à la carte menu items.

Here are some takeaways from the show regarding AI PCs.

Edge and Devices: “Over the TOPS” - AI PC

The AI PC was all over COMPUTEX 2024, and the major non-captive chip vendors were there to promote their silicon. They showed their latest products with a major focus on performance across the CPU, GPU, and Neural Processing Unit (NPU) in terms of Tera Operations per Second (TOPS), performance per watt, and AI workload performance for open-source Large Language Models (LLMs) like Llama-2. AI chipset performance benchmarking was central to the messaging of these suppliers. TOPS for NPUs was particularly the talk of the town, and Intel, AMD, and Qualcomm each claimed leadership, confusingly, on the basis of different benchmarks. Microsoft, ASUS, Acer, HP, Dell, and other key OEMs were invited to attend keynotes delivered by these players. They all provided similar testimonies encouraging the creation of chipsets with even more TOPS so developers can bring innovative AI applications to the marketplace. On the exhibition floor, chipset performance was also clearly demarcated as a battlefield and commercial differentiator.

Even though benchmarks are only valid for the conditions under which the chipsets are tested, at this early stage of AI PC market development, the performance competition and the race toward more TOPS and enhanced power efficiency are healthy, as they provide Independent Software Vendors (ISVs), OEMs, and application developers the computational bandwidth and speed they require to bring innovative, heavyweight, and computationally hungry AI applications to the market. From this perspective, the entire AI PC ecosystem is sending a strong and unanimous message: performance and power efficiency matter for AI PCs. Indeed, the entire ecosystem, including ISVs and OEMs, is backing chipset suppliers in their race for more TOPS for AI PC applications. They are united in promising a computing revolution similar to the emergence of the Internet in the early 1990s, of Wi-Fi in the late 1990s, and of social networks enabled by mobile broadband in the late 2000s.
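
To put the TOPS race in context, a rough back-of-envelope calculation illustrates why raw NPU throughput is only part of the on-device AI story. The Python sketch below is purely illustrative: the 40 TOPS figure mirrors Microsoft's Copilot+ threshold, while the memory bandwidth, model size, and quantization values are assumptions rather than any vendor's specifications.

```python
# Back-of-envelope: how NPU TOPS and memory bandwidth bound on-device LLM
# decode speed. All figures are illustrative assumptions, not vendor specs.

def tokens_per_second(params_billions: float,
                      npu_tops: float,
                      mem_bandwidth_gbs: float,
                      bytes_per_weight: float = 1.0) -> dict:
    """Estimate compute-bound and memory-bound token rates for LLM decoding."""
    # Roughly 2 operations (multiply + add) per parameter per generated token
    ops_per_token = 2 * params_billions * 1e9
    compute_bound = (npu_tops * 1e12) / ops_per_token

    # Each decode step streams every weight from memory once
    weight_bytes = params_billions * 1e9 * bytes_per_weight
    memory_bound = (mem_bandwidth_gbs * 1e9) / weight_bytes

    return {"compute_bound_tok_s": round(compute_bound),
            "memory_bound_tok_s": round(memory_bound),
            "effective_tok_s": round(min(compute_bound, memory_bound))}

# Hypothetical Copilot+-class laptop: 40 TOPS (INT8) NPU, ~120 GB/s memory,
# running a Llama-2-7B-sized model quantized to 8-bit weights.
print(tokens_per_second(params_billions=7, npu_tops=40, mem_bandwidth_gbs=120))
```

On numbers like these, token generation is bound by memory bandwidth long before the NPU's peak TOPS is exhausted, which is one reason the ecosystem's emphasis on power efficiency and platform-level design, and not just headline TOPS, is warranted.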

Productivity Applications Are Key to Pushing PC Innovation

To sustain the current PC revolution, the industry needs to move the discussion beyond just more TOPS to care about the user experience and to enable developers to build innovative applications capable of boosting the end user’s creativity and productivity. The current state of on-device AI applications shown in demos from chip vendors and OEMs is still largely immature. These applications are still at the Proof of Concept (PoC) level. Most players are using Stable Diffusion models to demonstrate AI capabilities; for instance, understanding dynamic prompts and translating them into meaningful and creative images in real time. Players have also demonstrated AI applications able to understand scenes from image or video content and translate them into descriptive text-based narratives. However, at this level, it is still not obvious how these applications can help users enhance their creativity or productivity in everyday life.
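
For a feel of what these prompt-to-image demos involve under the hood, the sketch below runs a Stable Diffusion pipeline locally with Hugging Face's diffusers library. This is a generic illustration rather than any vendor's demo code: the checkpoint name is an assumption, and the COMPUTEX demos rely on NPU-specific runtimes and heavily optimized models rather than this plain PyTorch path.

```python
# Minimal local text-to-image sketch using Hugging Face diffusers.
# Assumes `pip install torch diffusers transformers`; the checkpoint is an
# assumed public model, and real AI PC demos target NPU-specific runtimes.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed, publicly available checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# A "dynamic prompt" of the kind shown in on-device generation demos
image = pipe("a watercolor sketch of a night market in Taipei",
             num_inference_steps=25).images[0]
image.save("demo.png")
```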

Genuine productivity-enhancing tools will likely begin to emerge once Microsoft’s Copilot+ scales, potentially sometime in 2025 or 2026. Always-on semantic indexing that utilizes the NPU, together with developer access to the Copilot+ runtime, will help spur better application development for Windows AI PCs.
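
Conceptually, semantic indexing amounts to embedding local content and searching it by meaning rather than by keyword. The sketch below illustrates the idea with the open-source sentence-transformers library; it is not the Copilot+ runtime or its on-NPU models, and the encoder name and sample documents are assumptions chosen for illustration.

```python
# Illustrative semantic index over local snippets using sentence-transformers.
# Not the Copilot+ runtime; the model name and documents are assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small, laptop-friendly encoder

documents = [
    "Q3 budget spreadsheet with travel expenses",
    "Photos from the Taipei trip, June 2024",
    "Draft blog post on AI PC benchmark results",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "where are my holiday pictures?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks documents by meaning, not keyword overlap
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = int(scores.argmax())
print(documents[best], float(scores[best]))
```

An always-on version of this idea would re-embed files as they change and keep the index entirely on-device, which is where an efficient NPU earns its keep.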

It is worth noting that the AI PC definition is still hazy, and all vendors use this term to market their hardware in a joint effort with Microsoft. Copilot has become Copilot+, and Microsoft CEO Satya Nadella delivered a near-identical message at each chip vendor's keynote, which was revealing in itself: the world's top ISV is hedging its bets because, as of yet, there is no clear winner in sight. On the other hand, NVIDIA's claim to have invented the AI PC years before the term was coined is backed up by its installed base of performant discrete/add-on graphics cards in premium PCs and workstations.

CUDA's maturity, and the ability to run diverse applications either in the cloud or entirely on-device via NVIDIA Inference Microservices (NIMs), set NVIDIA's application ecosystem apart from the rest of today's AI PC market. However, given the price of compatible RTX GPUs, this will remain confined to the high-end consumer and professional segments, particularly in creative industries for content creation and design software.
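
As a rough illustration of how an application might consume a locally hosted NIM, the snippet below posts a chat request to the OpenAI-compatible HTTP interface that NIM containers expose. The port, path, and model identifier are assumptions for a typical local deployment, not fixed NVIDIA specifications.

```python
# Querying a locally hosted NVIDIA NIM container over its OpenAI-compatible
# HTTP API. The endpoint URL and model identifier below are assumptions for
# a typical local deployment.
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",  # assumed local NIM endpoint
    json={
        "model": "meta/llama3-8b-instruct",        # assumed model identifier
        "messages": [{"role": "user",
                      "content": "Summarize the AI PC trend in one sentence."}],
        "max_tokens": 128,
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```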

Semiconductor Companies Targeting AI Computers

In terms of competition, Qualcomm presents itself as an aggressive challenger in the AI PC market. The company leverages its heritage in mobile Systems-on-Chip (SoCs) and NPUs to create an AI accelerator for its Snapdragon X platform that is powerful enough to pass the Copilot+ test. Given chip development times, this was more luck than strategy, as Microsoft's Copilot and AI PC messaging has shifted in the last year to require more TOPS per watt, which gives an advantage to players with a strong NPU offering.

Qualcomm is the only Arm-based silicon competitor that could potentially enable Microsoft to compete effectively with Apple and its successful Arm-based M-series chipsets. If we consider Copilot+ compatibility to be the AI PC Holy Grail, Qualcomm has done well, as both its X Elite and X Plus pass Microsoft's 40 TOPS test. What was lacking from Qualcomm was a clear roadmap with information about the performance and release of future silicon for the X platform; we were told to wait until the Snapdragon Summit in October to hear more. This will not please enterprise buyers looking to manage their PCs' lifecycles with advance notice. However, Qualcomm's application developer environment, Qualcomm AI Hub, for running AI applications on its heterogeneous systems, has been expanded to include PCs, and an upcoming developer kit could help it catch up to its more established Windows PC counterparts, AMD and Intel.

Intel has a broader offering, and the first-mover Meteor Lake chipsets, which are already available and have shipped over 8 million units, are in line with its established position as the dominant force in the Windows PC market. ABI Research anticipates shipments of AI PCs to reach 54 million by the end of 2024, and the largest share is likely to come from Intel.

The Meteor Lake family of silicon does not yet meet Microsoft's Copilot+ performance requirements, which makes it appear as though Intel was blindsided; in reality, this is a consequence of the development time of commercial PC hardware and the recently established minimum performance requirements. However, the upcoming Lunar Lake laptop systems will meet the Copilot+ requirements, and Intel's comprehensive roadmap will reassure enterprise customers. Energy efficiency has been a key objective for Lunar Lake, as Intel is conscious of the lower-power Arm offering from Qualcomm and Apple's comparable in-house designs. Plenty of AI workloads will be able to run on Meteor Lake systems, which will eventually form part of the more mass-market offering alongside the Lunar Lake hardware for the premium segment; Meteor Lake will not disappear after Lunar Lake enters the market in 3Q 2024. With 80 notebook designs from over 21 OEMs, Microsoft knows that it needs Intel to scale AI PCs.

AMD was also actively promoting its AI PC offering, and the next-generation “Strix Point” Ryzen AI 300 will be available from July, following closely on the heels of Qualcomm's June release of the Snapdragon X platform. AMD leveraged its Windows PC legacy to reach scale with 100 design wins, and promoted software development on its platform, expecting to reach over 160 ISVs by the end of 2024. Both AI 300 systems set to launch next month meet the Copilot+ requirements, and the number of design wins points to a successful launch for the company, beating Intel to become the first x86 Copilot+ vendor. AMD's expectation of increasing its data center CPU market share may well be replicated in the PC market.

Stay tuned for a second article from the COMPUTEX 2024 show—this time focusing on NVIDIA’s approach to catering to growing data center/cloud requirements.

Tags: AI & Machine Learning

Written by Paul Schell
