The Show Is Back and Everyone Is Here, Even in Hologram Form
NEWS
The crowds were back at London Tech Week, a week-long international event held in the United Kingdom last week that boasted eight anchor events, more than 300 speakers, including well-known politicians such as Hillary Clinton and the current Ukrainian president, Volodymyr Zelensky (the latter appearing as a 3D hologram), and more than 20,000 visitors. The main events took place around Westminster, the heart of central London, while tucked away in now-fashionable East London was the AI Summit London. The summit was attended by many of the main players in the market, from Intel and IBM to Fujitsu and AMD Xilinx (NVIDIA was surprisingly absent), as well as by government agencies and a wide array of startups, many of them focused on data manipulation (e.g., Labelbox and Domino Data Lab) and natural language understanding (e.g., Deepgram and Kore.ai). The event showcased what is most exciting about AI (the investment, the innovation, the possibilities) and what is in less good health, notably the shortage of data scientists and the sometimes narrow view of AI practitioners.
Perennial Skills Shortages Require Innovation
IMPACT
The AI Summit was spread over two days. The first day saw some of the heavy hitters in action and was certainly the busier and better attended of the two. Intel, for instance, put forward its vision of a future in which AI will be everywhere, a scenario that involves using software as a conduit of sorts between data and hardware on one side and AI developers and users on the other. The main conversation of the day, however, revolved around the prospect of ethical and responsible AI, with various government and official agencies taking the lead, from the U.K. Ministry of Defence and NATO to the Bank of England, and with IBM and others pitching in (an Ethics in AI workshop was, surprisingly and unfortunately, invitation only). The AI Summit was organized alongside the Quantum Computing Summit, and the major events from that part of the overall gathering also took place on Day 1. Fujitsu brought along one of its supercomputers, and there was major news from the U.K. business secretary, Kwasi Kwarteng, who announced that the U.K. government will invest heavily in quantum computing as part of a 3-year, US$48 billion project.
Day 2 was a much quieter affair, allowing for better networking and a more focused approach to the talks; among the most significant were sessions on how to scale up cloud AI, federated learning, and edge AI, topics that ABI Research has covered extensively in recent research. In this vein, it is worth mentioning the connection to robotics, as there were a few talks on how AI learning can help robots become more autonomous in the warehouse, especially by improving the picking and gripping actions so common in that setting, another topic ABI Research has broached. Perhaps more telling, however, was a perceptible trend toward software solutions, many of them open source in nature (Red Hat provided some interesting demonstrations in this respect) and mostly meant to accelerate the kind of AI development work that would otherwise fall to data scientists and AI engineers. Though this is not a shortcoming per se (these solutions are very useful for enterprises), it does highlight the perennial shortage of advanced skills in AI (and robotics). In this sense, it did not go unnoticed that education and training featured heavily at the Summit. There were various workshops on these issues and even some hands-on tutorials from big players, with the "Hackathon" and "Into the Den" competitions putting their imprint on these endeavors. In the former, data scientists and developers were put to the test on a number of scalable AI solutions, while in the latter, vendors and their products faced a grilling from the visiting public.
More Diversity in Approach May Be Needed
RECOMMENDATIONS
The AI Summit London may well be one of the most diverse events ABI Research has attended, at least in terms of the backgrounds of the experts and visitors. Less diverse are the solutions AI developers are proposing, almost all of which involve Machine Learning (ML). Indeed, the commercial AI space is almost completely dominated by one kind of ML, namely Deep Learning (DL), even though these methods are not always applied to problems that involve any genuinely intelligent task. As statistical correlation machines, ML/DL models can be applied to pretty much anything that requires recognizing some kind of pattern. Some of the most famous AI milestones, especially in highly specialized tasks such as the games of chess and Go, instead involve hybrid AI algorithms: systems in which DL processes are augmented with symbolic representations, the latter functioning as memory pointers, an area in which pure ML models have struggled. This approach was disappointingly absent from the proceedings, but ABI Research believes there is much promise in hybrid AI models, as symbolic representations can help in areas that DL may not be able to model for intrinsic reasons, such as memory, contextual information, and flexibility. Startup uptake of these models is very limited (Robust.AI is an exception), but the approach ought to be explored much more. Creating ever bigger AI models is not a solution to every problem involving intelligence; in fact, there are good reasons to believe that scaling up is unlikely to solve many of the tasks humans carry out effortlessly on a daily basis, and it may not be environmentally feasible anyway. Something will have to give, and that something should involve creative innovation, not simply making models bigger.
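To make the hybrid idea concrete, below is a minimal, purely illustrative Python sketch of the pattern described above: a neural perception step (stubbed here, standing in for a real DL model) proposes labels, while a symbolic layer records confident observations as explicit facts and applies a simple rule over them, providing the persistent, queryable memory and contextual reasoning that an end-to-end DL model does not expose. All names, labels, and rules are hypothetical and chosen only for illustration.

```python
# Minimal neuro-symbolic sketch: a (stubbed) neural recognizer proposes labels,
# and a symbolic memory stores them as explicit facts and reasons over them.

from dataclasses import dataclass


@dataclass(frozen=True)
class Fact:
    """A symbolic fact, e.g., Fact('contains', ('frame_001', 'box'))."""
    predicate: str
    args: tuple


def neural_recognizer(frame_id: str) -> tuple[str, float]:
    """Stand-in for a DL perception model: returns (label, confidence)."""
    fake_predictions = {
        "frame_001": ("box", 0.93),
        "frame_002": ("pallet", 0.88),
        "frame_003": ("box", 0.41),  # low-confidence prediction
    }
    return fake_predictions.get(frame_id, ("unknown", 0.0))


class SymbolicMemory:
    """Explicit, inspectable memory: the component a pure DL model lacks."""

    def __init__(self, min_confidence: float = 0.5):
        self.facts: set[Fact] = set()
        self.min_confidence = min_confidence

    def observe(self, frame_id: str) -> None:
        label, confidence = neural_recognizer(frame_id)
        # Only commit confident perceptions to symbolic memory.
        if confidence >= self.min_confidence:
            self.facts.add(Fact("contains", (frame_id, label)))

    def query(self, predicate: str, arg: str) -> list[Fact]:
        return [f for f in self.facts if f.predicate == predicate and arg in f.args]

    def apply_rule(self) -> list[str]:
        """Toy rule: any frame containing a 'box' becomes a picking candidate."""
        return [
            f"pick_candidate({f.args[0]})"
            for f in self.facts
            if f.predicate == "contains" and f.args[1] == "box"
        ]


if __name__ == "__main__":
    memory = SymbolicMemory()
    for frame in ("frame_001", "frame_002", "frame_003"):
        memory.observe(frame)
    print(memory.query("contains", "box"))  # persistent, queryable memory
    print(memory.apply_rule())              # symbolic rule over neural output
```

The point of the sketch is only that the symbolic layer makes state explicit and queryable; in a deployed hybrid system, the recognizer would be a trained DL model and the symbolic side would involve far richer representations and reasoning.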