TinyML-Based Machine Vision Has Large Commercial Potential
| NEWS |
Bringing machine learning (ML) to all forms of devices has been key to enabling distributed intelligence. This vision is shared by many Artificial Intelligence (AI) vendors, including Arm, Qualcomm, and Google. As such, these companies have been championing the development of the Tiny Machine Learning (TinyML) ecosystem. TinyML is broadly defined as ML technology that performs data analytics on hardware and software dedicated to low-power systems, typically in the milliwatt (mW) range, using algorithms, networks, and models of 100 Kilobytes (kB) or less. TinyML applications range from the detection of ambient temperature, vibration, and voltage to the identification of images, voice, and video.
Among these applications, TinyML-based machine vision is expected to play a key role in both consumer devices, such as wearables and smart glasses, and enterprise equipment, such as security cameras and ambient sensors. Key use cases include facial recognition-based auto-wake and auto-sleep functions, gaze tracking for advertising, occupancy heat mapping, and smart triggering of, and interaction with, wearables and surrounding infrastructure. These use cases have large potential because they bring intelligence to infrastructure and devices that previously relied heavily on cloud processing and high Internet bandwidth. In addition, keeping processing on the device helps AI systems meet the security and privacy requirements of various government agencies and jurisdictions.
Qualcomm Offers TinyML Hardware and Software Solutions
| IMPACT |
Since 2015, Qualcomm has been at the forefront of TinyML research, particularly in the domain of machine vision. The company has launched both hardware and software solutions for developers looking to incorporate TinyML into their product offerings. Qualcomm’s hardware solution, the QCC112 System-on-Chip (SoC), is optimized for always-on machine vision workloads. The SoC features an ultra-low-power microcontroller (MCU), a streaming array processor, a vision accelerator, an embedded power management unit, and custom memory. As a result, the SoC is capable of running computer vision algorithms while operating continuously on less than 1 milliwatt (mW) of power. Traditionally, image sensors consume 10 mW to 1 W of power, with additional processing required on-device or in the cloud. TinyML enables the QCC112 to operate alongside an image sensor, recognize the specific types of images or gestures for which it has been trained, and provide an output based on ML inference. This offers the additional benefits of low latency and improved privacy.
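To make the always-on pattern concrete, the minimal Python sketch below runs a small vision model in a continuous loop and raises a wake signal once the class it was trained to recognize is detected with sufficient confidence. It uses the generic TensorFlow Lite interpreter, and the model file ("person_detect.tflite"), frame source (get_frame()), and threshold are hypothetical placeholders; an actual QCC112 deployment would rely on Qualcomm's own firmware and tooling rather than Python.

```python
import numpy as np
import tensorflow as tf

# Illustrative always-on detection loop, not Qualcomm's actual firmware.
# "person_detect.tflite" and get_frame() are hypothetical placeholders for
# a trained TinyML vision model and the low-power image sensor feed.
interpreter = tf.lite.Interpreter(model_path="person_detect.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

WAKE_THRESHOLD = 0.8  # confidence required to wake the host processor


def get_frame():
    """Placeholder: return one low-resolution frame from the image sensor."""
    shape = input_details["shape"]  # e.g., [1, 96, 96, 1]
    return np.random.randint(0, 256, size=shape).astype(input_details["dtype"])


while True:
    interpreter.set_tensor(input_details["index"], get_frame())
    interpreter.invoke()
    scores = interpreter.get_tensor(output_details["index"]).astype(np.float32)
    if scores.max() > WAKE_THRESHOLD:  # assumes a float score in [0, 1]
        print("Trained target detected: waking the host system")
        break
```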
Creating embedded ML models for Internet of Things (IoT) modules and sensors also requires end developers to adapt their AI models to the limited power and memory budgets of these devices through a process called quantization. In recent years, considerable research has gone into advancing quantization techniques. Major AI frameworks, such as TensorFlow (and, by extension, TensorFlow Lite) and PyTorch, support quantization through methods such as post-training quantization and quantization-aware training. However, these quantization processes still require access to representative data sets and, in the case of quantization-aware training, additional training cycles. This runs counter to current ML trends, which favor small data and ease of use.
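As an illustration of one of these standard workflows, the sketch below applies TensorFlow Lite's post-training integer quantization to a trained Keras model. Even this "post-training" path needs a representative calibration set, which is precisely the dependency on data that a data-free approach removes; `model` and `calibration_samples` are placeholders for the developer's own artifacts.

```python
import tensorflow as tf

# Post-training integer quantization with TensorFlow Lite (illustrative).
# `model` is an already-trained tf.keras model and `calibration_samples`
# is a small batch of representative inputs; both are placeholders here.

def representative_dataset():
    # Yield a handful of real inputs so the converter can calibrate
    # activation ranges for 8-bit quantization.
    for sample in calibration_samples:
        yield [tf.cast(sample[tf.newaxis, ...], tf.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # integer I/O for MCU targets
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} kB")
```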
In 2019, Qualcomm announced a new quantization technique, known as data-free quantization, which requires no additional data or training and can be applied after training via an Application Programming Interface (API). Data-free quantization is currently available in Qualcomm’s Neural Processing Software Development Kit (SDK). The company followed up with another software offering, the AI Model Efficiency Toolkit (AIMET), in 2020, which supports network compression and data-free quantization of AI models built in TensorFlow and PyTorch.
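To give a sense of how a data-free approach can work, the NumPy sketch below illustrates cross-layer weight equalization, the idea described in Qualcomm's data-free quantization research: adjacent layers are rescaled channel by channel using only their weights, so their ranges quantize more evenly while the network's output is unchanged. This is a simplified illustration under the assumption of ReLU activations, not the AIMET API, which also applies further steps such as bias correction.

```python
import numpy as np

# Cross-layer equalization: rescale output channel i of layer 1 by 1/s_i and
# the matching input channel of layer 2 by s_i, so both per-channel weight
# ranges become sqrt(r1_i * r2_i). ReLU commutes with positive scaling, so
# the network's function is preserved while ranges are balanced for 8-bit
# quantization -- and no input data is needed, only the weights themselves.
def equalize(W1, b1, W2, eps=1e-8):
    r1 = np.abs(W1).max(axis=1)        # per-output-channel range of layer 1
    r2 = np.abs(W2).max(axis=0)        # per-input-channel range of layer 2
    s = np.sqrt(r1 * r2) / (r2 + eps)  # equalization scale per channel
    return W1 / s[:, None], b1 / s, W2 * s[None, :]

# Toy example: two dense layers with very unbalanced channel ranges.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8)) * np.array([0.01, 0.1, 1.0, 10.0])[:, None]
b1, W2 = rng.normal(size=4), rng.normal(size=(3, 4))
W1_eq, b1_eq, W2_eq = equalize(W1, b1, W2)

x = rng.normal(size=8)
relu = lambda z: np.maximum(z, 0.0)
original = W2 @ relu(W1 @ x + b1)
equalized = W2_eq @ relu(W1_eq @ x + b1_eq)
assert np.allclose(original, equalized)  # the function is preserved exactly
```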
Clear Advantages in an Increasingly Competitive Landscape
| RECOMMENDATIONS |
Still in its infancy, the TinyML market has shown a lot of potential, but will need some time to mature. According to a recent report on Very Edge AI Chipset for TinyML Applications (AN-5037), a total of 2.5 billion devices are expected to ship with a TinyML chipset in 2030. Qualcomm is far from the only company involved in TinyML-based machine vision; CMOS vendors such as Sony and Himax are also integrating TinyML chipsets into their CMOS sensors. Compared with these CMOS vendors, Qualcomm is better positioned to enable developers through its end-to-end portfolio. Qualcomm’s heterogeneous AI portfolio, which spans CPUs, GPUs, Digital Signal Processors (DSPs), and Application-Specific Integrated Circuits (ASICs), already powers many existing IoT solutions, offering a large installed base for TinyML applications. By leveraging the company’s end-to-end product offering and strong presence in the IoT space, developers can create TinyML applications without interoperability concerns or the complexity of juggling hardware and SDKs from different vendors, which in turn shortens time-to-market and reduces market fragmentation.