A Core Theme of Snapdragon Summit 2024
|
NEWS
|
The Snapdragon Summit 2024 kicked off with a keynote delivered by Qualcomm Chief Executive Officer (CEO) Cristiano Amon, and featured many of the themes that most in the industry would have anticipated—leaps forward in compute power and efficiency, and the spread of Qualcomm’s capabilities beyond mobile devices into Artificial Intelligence (AI) Personal Computers (PCs) and automotive. However, the most interesting trend set out during the Day 1 Keynote was not a new core, System-on-Chip (SoC), or any other hardware advancement. Rather, it was a compelling vision for “Agentic AI,” wherein Generative Artificial Intelligence (Gen AI) plays the role of a personalized, context-aware interface between the user and all of the features and services available on the device in question.
Currently, the dominant form of User Interface (UI) in most digital devices is the touchscreen, through which the user interacts with the different apps that combine to shape the value proposition of any software-defined device, including smartphones and, increasingly, cars. The drawback of this approach is the inevitability of traipsing from one app to the next; opening, closing, and re-opening whichever apps are needed to complete any single task. The more complex the task, the greater the number of apps involved in the task, and the greater the number of these app-to-app steps the user must input through the touchscreen.
By leveraging AI to give the device an understanding of the user, their preferences, and the broader context of their location and status, Agentic AI will serve as an orchestrator of the various services and applications available on the user’s device, allowing the user to access, with a simple verbal command, features that are currently obscured behind a repetitive, app-by-app process. Of course, to deliver Agentic AI in a cost-efficient, responsive, and privacy-securing way, energy-efficient on-device compute will be essential, tying into Qualcomm’s DNA as a supplier of connected compute for AI.
The Ideal Automotive Interface
|
IMPACT
|
If the dominant, app-defined user experience can be a source of frustration on the smartphone, when transplanted into the automotive market, it can be a source of danger to the driver and to public safety. Interfaces designed for a screen held, with both hands if necessary, a matter of inches from the user’s face simply do not transpose well to a screen located at arm’s length and visible only in the peripheral vision of a driver who ought to have the lion’s share of their attention directed toward the road environment. Indeed, the touchscreen interface has recently drawn the attention of Euro NCAP, an agency that plays a pivotal role in shaping best practices for automotive safety.
Beyond the safety challenges, the app-to-app model of user engagement can obscure vehicle capabilities from the driver to an even greater degree than in smartphones. As Original Equipment Manufacturers (OEMs) aggregate more and more data from connected cars, one of the most popular use cases for big data in automotive has been to gain a better understanding of how drivers interact with their vehicles in practice over time. One of the major early learnings has been that some features go practically unused by most drivers throughout the entire vehicle lifecycle, with Ford’s decision to withdraw automated parking features from future models a prime example of these learnings being put into practice.
In some cases, it may well be that the OEM has miscalculated, and that, in spite of consumer survey responses and focus group feedback, these unused features simply do not offer enough consumer value; this alone would explain why consumers have neglected them. However, it may also be that the touchscreen, app-to-app interface has hidden from the driver much of the value of the features in which OEMs have invested considerable resources. An Agentic AI that knows not only the user’s preferences and the location context, but also the capabilities of the vehicle, even when those capabilities are distributed across a number of discrete applications, could prove the key to unlocking the full value of Software-Defined Vehicles (SDVs) for their drivers.
If there is any end market in which an intelligent copilot is clearly needed, it is surely the automotive market.
"Hello, OEM..."
|
RECOMMENDATIONS
|
A pivot away from touchscreen, app-dominated UIs would be welcomed by many automakers, as in this technology sphere, automotive will always be the follower and never the leader. For all that automakers have done and are doing to accelerate their design cycles, some lag behind smartphones will always be inevitable, and the gulf between the best-in-class experiences offered by the smartphone and the best efforts mustered by the automaker will always be noticeable to the consumer.
This is one of the reasons that automakers have pushed hard to introduce AI-based voice assistants into the vehicle, particularly when the wake word can be tied to the OEM brand, thereby associating that brand with all of the features and applications the voice interface opens up to the consumer, even those that, in reality, have little or nothing to do with the automaker. Agentic AI could take this even further: if properly associated with the automaker through a branded wake word, it would link a personalized, context-aware experience with the automaker’s brand identity.
The greatest threat here would come from the rise of Agentic AI in other software-defined devices. In the event that Agentic AI becomes commonplace in smartphones and AI PCs, consumer demand for consistency across all of the devices in their digital lives would likely push the automaker to accept a third-party assistant serving as the primary interface between the consumer and the vehicle’s functions. Therefore, there is a short window of opportunity in which automakers, with their vehicles being so well suited to Agentic AI interfaces, could play the role of leader, rather than follower, in UI development.