Decreasing Cost and Increasing Applications of Vision AI
| NEWS |
Machine Vision (MV) has found its place within manufacturing and supply chains, but restrictions in both hardware and the computing power available to process more varied imaging have limited wider applications. MV is typically used to spot product defects or scan inventory, repetitive tasks with minimal variance, so only a relatively narrow range of image processing is required.
However, recent developments in Artificial Intelligence (AI)-powered systems are enabling enhanced perception capabilities, allowing MV to be applied to more varied and complex use cases, such as asset tracking and robot navigation. And it is not just AI's capabilities that are advancing; usage costs are also falling. According to OpenAI, the cost of using certain AI models has dropped by as much as 90%, driven largely by continuous technological advancement and intensifying competition.
ABI Research forecasts that global revenue of camera systems in transport & logistics will reach US$5.7 billion by 2028, with global shipments of camera systems in the industry growing at a Compound Annual Growth Rate (CAGR) of 40.9% from 2023 to 2028. The opportunity for AI-powered MV systems is substantial and, because many image recognition systems run on Deep Learning (DL) models, MV systems will also become smarter over time: as adoption grows, the capabilities of these systems should follow suit.
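To put that growth rate in context, the short sketch below shows how a 40.9% CAGR compounds over the 2023 to 2028 window. The unit baseline is a placeholder for illustration only, not a reported shipment or revenue figure.

```python
# Illustrative only: how a 40.9% CAGR compounds over the 2023-2028 window.
# The baseline value of 1.0 is a hypothetical placeholder, not an ABI Research figure.

def compound(base: float, cagr: float, years: int) -> float:
    """Return the value of `base` after `years` of growth at annual rate `cagr`."""
    return base * (1 + cagr) ** years

growth_factor = compound(1.0, 0.409, 2028 - 2023)
print(f"Cumulative growth factor, 2023-2028: {growth_factor:.2f}x")
# -> roughly 5.5x, i.e., 2028 shipments would be about 5.5 times the 2023 level
```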
Potential for Asset Tracking and Autonomous Navigation
| IMPACT |
Leading Internet of Things (IoT) vendors have acknowledged that MV could begin to replace more traditional asset tracking and visibility solutions, such as Radio Frequency Identification (RFID), particularly where active and passive RFID solutions fall short of providing true real-time visibility of assets. This is increasingly noticeable in more dynamic and outdoor environments, where manual scanning is required or active tags must be applied en masse to provide end-to-end tracking.
Blue Yonder, a leading provider of supply chain execution, planning, and commerce solutions, has built its emerging yard management solution on MV, with a Proof of Concept (POC) rolled out at two Third-Party Logistics (3PL) providers and a large grocery retailer, and expansion of the technology planned through 2024. Powered by Machine Learning (ML), computer vision techniques based on DL models are used at the yard gate to identify trucks via number plate or trailer Identification (ID) numbers, enabling the system to automatically check trucks in as they arrive at the yard. The system then places a digital tag on each trailer, so that as the truck moves around the yard, the tag follows it and the trailer's identity is maintained. Cameras are strategically placed in each section of the yard to achieve real-time location of the trailers without the manual tag scanning that RFID-based tracking solutions require.
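As an illustration of the kind of gate check-in and digital-tagging flow described above, the minimal sketch below shows one possible structure. The YardTracker and DigitalTag names, the zone labels, and the trailer ID are assumptions made for illustration; they do not represent Blue Yonder's actual implementation.

```python
"""Hypothetical sketch of a camera-driven yard check-in and digital-tag flow.
All names (YardTracker, DigitalTag, zone labels) are illustrative assumptions."""

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DigitalTag:
    trailer_id: str          # e.g., number plate or trailer ID read at the gate
    current_zone: str        # yard section covered by the last camera sighting
    last_seen: datetime = field(default_factory=datetime.now)

class YardTracker:
    def __init__(self):
        self.tags: dict[str, DigitalTag] = {}

    def check_in(self, trailer_id: str) -> DigitalTag:
        """Gate camera reads the plate or trailer ID and creates the digital tag."""
        tag = DigitalTag(trailer_id=trailer_id, current_zone="gate")
        self.tags[trailer_id] = tag
        return tag

    def update_sighting(self, trailer_id: str, zone: str) -> None:
        """A zone camera re-identifies the trailer; the digital tag follows it."""
        tag = self.tags.get(trailer_id)
        if tag is not None:
            tag.current_zone = zone
            tag.last_seen = datetime.now()

# Usage: the gate camera checks a trailer in; a zone camera later spots it at dock A.
yard = YardTracker()
yard.check_in("TRL-48213")
yard.update_sighting("TRL-48213", zone="dock-A")
print(yard.tags["TRL-48213"])
```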
In 2021, yard management solutions provider Peripass partnered with Robovision, a vision AI company, to deliver a POC project for H.Essers in Genk, Belgium, using vision learning and smart cameras to build a Real-Time Trailer Location Service that localizes trucks in the yard. The AI was trained to identify trailers day and night, in different weather conditions, and from different angles, using spatial and temporal information to identify, track, and differentiate almost identical trailers. The solution achieved a 97% accuracy level, pinpointing trailers to within 1 Meter (m), compared to the 10 m accuracy originally achieved with a Global Positioning System (GPS).
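To show in general terms how spatial and temporal information can keep nearly identical objects apart, the sketch below associates detections in the current frame with tracked trailers from the previous frame using simple bounding-box overlap (Intersection over Union). This is a generic illustration, not Robovision's actual method, and all coordinates and track names are invented.

```python
"""Generic spatial-temporal association sketch (not Robovision's method):
match current-frame detections to existing tracks by bounding-box overlap."""

def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, threshold=0.3):
    """Greedily assign each detection to the existing track it overlaps most."""
    assignments = {}
    for det_idx, det in enumerate(detections):
        best_id, best_score = None, threshold
        for track_id, box in tracks.items():
            score = iou(box, det)
            if score > best_score:
                best_id, best_score = track_id, score
        assignments[det_idx] = best_id  # None means a new, unmatched trailer
    return assignments

# Two nearly identical trailers: temporal continuity (overlap with the previous
# frame) keeps their identities separate even when appearance alone cannot.
tracks = {"trailer-A": (100, 50, 220, 150), "trailer-B": (400, 60, 520, 160)}
detections = [(105, 52, 226, 152), (395, 58, 515, 158)]
print(associate(tracks, detections))  # {0: 'trailer-A', 1: 'trailer-B'}
```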
On top of this, MV also looks poised to become a cheaper, more effective set of eyes for mobile robots. Light Detection and Ranging (LiDAR), the go-to technology for enabling Autonomous Mobile Robot (AMR), Automated Guided Vehicle (AGV), and autonomous forklift navigation, has generally won out over MV systems, despite its higher hardware cost, given its strengths in distance measurement and depth perception, capabilities that an MV application would need to be trained to replicate. But the increasing accessibility of small, powerful processors for edge AI applications and the growing ability of AI to interpret visual feeds and understand complex environments could see MV challenge LiDAR's industry dominance.
RGo Robotics, a startup operating in this space, offers its AI-powered Perception Engine, which runs on ultra-low-cost, low-power hardware to give mobile robots more human-like perception. The added processing sophistication of the AI-powered system lets robots operate over larger spaces without the need for connectivity and interact with humans in a more intelligent and reactive way. It also enables automatic map creation for the connected robot fleet, removing the need for facility mapping at implementation, and lets robots automatically adjust their movements in response to changing natural features. Because the perception capability is driven entirely by AI software, integration of the solution becomes cheaper and more scalable.
Pinpointing Immediate Industry Opportunities Is Key
| RECOMMENDATIONS |
While AI and MV are not new to the supply chain, creating a stronger synergy between the two technologies helps break down many of the barriers that previously limited MV's viability.
For end users, technologies like MV, particularly with AI augmentation, present a clear opportunity for full automation, but the technology can also play a vital role in supporting manual workers from both efficiency and safety perspectives. Applying MV to forklifts, for example, can provide drivers with additional perception support, hazard spotting, warnings and alerts, inventory location assistance, and pick verification. Combining AI-powered MV with manual workers creates an optimal blend for industrial task completion.
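As a simple illustration of the driver-assistance idea, the sketch below raises a warning whenever a detected hazard falls inside an assumed alert radius. The Detection structure, the hazard labels, and the 3 m threshold are hypothetical choices for illustration, not a specific vendor's product behavior.

```python
"""Hypothetical sketch of a forklift-mounted MV assist: warn the driver when a
detected person or vehicle falls inside an assumed hazard radius."""

from typing import NamedTuple

class Detection(NamedTuple):
    label: str         # e.g., "person", "pallet", "forklift" (illustrative labels)
    distance_m: float  # estimated distance from the forklift, in meters

HAZARD_LABELS = {"person", "forklift"}
HAZARD_RADIUS_M = 3.0  # assumed alert threshold

def hazard_alerts(detections: list[Detection]) -> list[str]:
    """Return a human-readable alert for every hazard inside the radius."""
    return [
        f"WARNING: {d.label} detected {d.distance_m:.1f} m ahead"
        for d in detections
        if d.label in HAZARD_LABELS and d.distance_m <= HAZARD_RADIUS_M
    ]

# Example frame: one pedestrian close by, one pallet farther away.
frame = [Detection("person", 2.4), Detection("pallet", 5.1)]
for alert in hazard_alerts(frame):
    print(alert)
```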
Identifying the right environments for this level of perception is also critical, from both end-user and solution-developer perspectives. AI-based MV solutions have an edge in more complex and variable situations with changing conditions, but they should not be thought of as immediate replacements in all asset tracking and navigation scenarios. Most industrial processes are repetitive and predictable, and organizations would gain little from trying to integrate AI-powered perception into them. MV providers should focus their efforts on specific areas of the supply chain, such as yards and ports, where many end users report that current tracking solutions fall short. From a robotic guidance perspective, last-mile delivery, service robots, outdoor machinery, and warehouse equipment that adjusts task routes frequently are all areas where AI-powered MV will gain far better traction than in more uniform environments.
Partnerships and system integrations with companies operating in these areas will be key to extending MV's reach. Continuous development of the solution through DL, offered to end users as a POC, will also help smooth adoption, and pricing should reflect this learning cycle if complete accuracy cannot be guaranteed from the outset. Successful applications of AI will also rely heavily on finding the right use cases that provide immediate, tangible value, and companies should remain wary of vendor "AI-washing" when selecting or integrating with MV solutions.