Alternative Intelligence Shows Great Promise for Robots
NEWS
Investors believe that neuromorphic computing—processing architecture that mimics the biology of brains—will eventually challenge NVIDIA’s chip supremacy by delivering energy and processing gains that far surpass the current commercial state-of-the-art. But beyond hardware, biomimetic algorithms offer significant advantages for robotics. The theory is that biomimicry—which hinges on dynamic memory and processing—can produce machine behavior, reasoning, and responsiveness akin to that of an animal or even a human being, potentially superseding the advantages promised by nascent efforts to adapt generative Artificial Intelligence (AI) for robotics.
Traditional Artificial Neural Networks (ANNs)—the backbone of most contemporary AI, from image processing to foundation models—rely heavily on training data, and their models can only be updated incrementally in real time, at a large computational cost. However, new neural algorithms are demonstrating substantial real-time performance gains for robotics. Emerging algorithmic approaches that can be deployed on current hardware, often by emulating neuromorphic architecture, include Continuous-Time Neural Networks (CTNNs) and the closely related Liquid Neural Networks (LNNs). These algorithms excel at interpreting spatiotemporal data, improving responsiveness; are less dependent on training, building their models dynamically; can be run (and trained) on low-resource chipsets such as Field Programmable Gate Arrays (FPGAs); and, like their hardware counterparts, provide dramatic energy reductions, enabling extended robot uptime in the field.
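To make the continuous-time idea more concrete, the sketch below implements a liquid time-constant style cell in plain NumPy: each unit's effective time constant is modulated by the incoming signal, which is what lets the state keep adapting to streaming sensor data after training. The forward-Euler solver, parameter shapes, and names are illustrative assumptions, not any vendor's implementation.

```python
# Minimal liquid time-constant (LTC) style cell, sketched in NumPy.
# All parameter choices and the Euler integration step are assumptions
# for illustration only.
import numpy as np

class LTCCell:
    def __init__(self, n_inputs, n_units, dt=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0, 0.5, (n_units, n_inputs))   # input weights
        self.W_rec = rng.normal(0, 0.5, (n_units, n_units))   # recurrent weights
        self.bias = np.zeros(n_units)
        self.tau = np.ones(n_units)   # base time constants
        self.A = np.ones(n_units)     # target states the dynamics relax toward
        self.dt = dt

    def step(self, x, u):
        # Input-dependent gate f(x, u): because it is bounded and driven by
        # the signal, the effective time constant varies with the input
        # (the "liquid" behavior).
        f = 1.0 / (1.0 + np.exp(-(self.W_rec @ x + self.W_in @ u + self.bias)))
        # dx/dt = -(1/tau + f) * x + f * A, integrated with one Euler step.
        dxdt = -(1.0 / self.tau + f) * x + f * self.A
        return x + self.dt * dxdt

# Usage: drive the cell with one sensor sample per control tick.
cell = LTCCell(n_inputs=3, n_units=8)
state = np.zeros(8)
for t in range(100):
    sensor = np.sin(0.1 * t) * np.ones(3)   # stand-in for a real sensor feed
    state = cell.step(state, sensor)
```

Because the whole update is a handful of small matrix operations per tick, a network like this can plausibly run within the resource budget of an FPGA or microcontroller-class device, which is the deployment pattern the vendors above describe.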
Biomimicry for Robotics Is Already Commercially Mature
IMPACT
Beyond the research ventures of Intel and IBM, university spinouts have begun commercializing these alternative algorithms for machine intelligence. Examples include the following:
- Liquid AI: Having emerged from stealth in December 2023, this Massachusetts Institute of Technology (MIT) spinout leverages the “liquid” aspect of its AI model—inspired by the nervous system of the nematode worm—to provide an energy-efficient framework that continues to adapt and learn after training. The model has been applied to autonomous cars and drones, with Liquid AI claiming more responsive and adaptive results than the current state-of-the-art, with significantly less training and power consumption. Predictability and transparency are further key advantages of Liquid AI’s Intellectual Property (IP). Although commercially embryonic, Liquid AI is scaling for enterprise adoption beyond robotics and hopes to provide services akin to ChatGPT.
- Opteran: A spinout of the University of Sheffield, United Kingdom, Opteran does not leverage neuromorphic emulation, but instead a unique biomimetic algorithm derived from the study of honeybees. It is a commercially mature, alternative machine intelligence Software-as-a-Service (SaaS) company; earlier in 2024, it announced a partnership with SAFELOG, a large Germany-based warehouse robotics specialist. Opteran has achieved success with robust, low-energy localization solutions (analogous to Simultaneous Localization and Mapping (SLAM)) for autonomous robots, along with proprietary algorithms for managing camera blur and lens flare. The company has larger ambitions, with a product roadmap containing solutions for machine vision and autonomous decision-making.
- ViVum: Operating adjacent to the defense sector, ViVum provides a suite of low-energy edge intelligence algorithms for robotics. ViVum utilizes approaches that include CTNNs and LNNs deployed on FPGAs. Applying these techniques to drones and wheeled robots, the company has demonstrated considerable improvements in energy consumption and machine autonomy.
Dedicated neuromorphic hardware has also been commercialized on a small scale. Several vendors have manufactured small neuromorphic chips with limited capabilities for lighter sensor-support tasks, and these products have demonstrated significant battery life extension for edge sensors. Companies active in this space include Aspinity, Innatera, POLYN, and BrainChip. The latter recently used neuromorphic hardware to showcase an edge Large Language Model (LLM) with impressive results; feasibly, such an innovation could be deployed to issue verbal commands to robots at minimal computational cost. Generally, these chips are designed to perform basic signal analysis and then wake and interface with a secondary device, such as a microcontroller, for higher-level processing if warranted. By augmenting existing systems and reducing their active time, stakeholders can realize significant energy savings and improved performance via intelligent data pre-processing.
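As a rough illustration of that wake-and-hand-off pattern, the sketch below shows an always-on front end scoring incoming frames with a cheap feature and only waking the main processor when a frame looks worth deeper analysis. The energy threshold, frame size, and the wake_host() stand-in (in place of a real interrupt line) are assumptions, not any vendor's interface.

```python
# Simplified "analyze, then wake" pattern: a low-power front end screens
# each frame and hands it to the host only when it clears a threshold.
# Threshold, frame length, and the wake interface are illustrative.
import numpy as np

FRAME_LEN = 256          # samples per analysis frame (assumed)
ENERGY_THRESHOLD = 0.02  # normalized energy above which the host is woken

def frame_energy(frame: np.ndarray) -> float:
    """Cheap feature the always-on front end can compute continuously."""
    return float(np.mean(frame ** 2))

def wake_host(frame: np.ndarray) -> None:
    """Stand-in for asserting an interrupt / waking a microcontroller."""
    print(f"host woken, frame energy={frame_energy(frame):.4f}")

def front_end_loop(stream):
    for frame in stream:
        if frame_energy(frame) > ENERGY_THRESHOLD:
            wake_host(frame)   # higher-level processing happens on the host
        # else: the host stays asleep; only the tiny front end draws power

# Usage: mostly quiet frames, with one louder burst that triggers a wake.
rng = np.random.default_rng(1)
quiet = [0.01 * rng.standard_normal(FRAME_LEN) for _ in range(5)]
burst = [0.5 * rng.standard_normal(FRAME_LEN)]
front_end_loop(quiet + burst + quiet)
```

The energy savings come from the asymmetry in the loop: the cheap screening feature runs continuously, while the expensive downstream processing runs only on the small fraction of frames that clear the threshold.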
Current Generative AI Is Fundamentally Incapable of Significantly Extending Robot Capabilities
RECOMMENDATIONS
The long history of robotics and its utility is defined by discrete actions within controlled environments. “Robotic tasks,” i.e., doing the same task repeatedly and achieving the same result with millimeter precision, are where robots’ value has historically resided. Extending robotics beyond the assembly line—even the modest distance to the warehouse floor—causes environmental variables to multiply exponentially, resulting in fundamentally unsolvable problems. A robot must be taught all contingencies in advance: every edge case, scenario, action, and behavior. This has long hindered autonomous vehicles; environments and people can create scenarios that have never been witnessed before, let alone been represented in the training data. The current objective of technology leaders is to create databases of robot behaviors that approximate every conceivable action and scenario. For “general-purpose robots,” or robots expected to interact with the uncontrolled real world, this is a computationally impossible task.
Although inference and learning for robots have progressed significantly in recent years, solutions remain limited, slow, and computationally expensive. Current foundation models for robotics, such as the Toyota Research Institute’s Large Behavior Model and NVIDIA’s Isaac suite, will further extend robot applications beyond the assembly line and enable marginally more complex use cases in controlled environments, such as material handling and picking from a crowded bin. However, efficiency gains are likely to be overshadowed by a lack of adaptability, preventing robot operation in unstructured or unpredictable environments, such as construction sites or around human beings. Determinism is another significant issue. Large foundation models are a black box: convoluted internal logic between nodes creates unpredictable and unrepeatable behaviors. This is a key issue fueling safety and repeatability concerns for the crossover of generative AI and robotics. Advocates claim that CTNNs, due in part to the reduced number of nodes in the network, can provide greater transparency and predictability.
Proponents believe that biomimetic algorithms—notably LNNs—have the capability to extend robot adoption into new, unstructured, and complex environments. Given the maturity and cost savings of alternative forms of machine intelligence, decision-makers ought to think twice before spending on batteries and Graphics Processing Units (GPUs).