Las Vegas Announces the Deployment of Its 5G Private Network for AI Use Cases
NEWS
Kicking off this year’s North American Mobile World Congress (MWC) conference in early October, the City of Las Vegas announced that it has deployed a private 5G network with built-in Artificial Intelligence-Radio Access Network (AI-RAN) technology. U.S.-based network solutions provider Vapor IO built the network on NVIDIA hardware, integrating a Large Language Model (LLM) for AI inference, as well as computer vision capabilities through its Zero Gap AI offering. The city is simultaneously deploying Internet of Things (IoT) cameras and sensors for use cases such as locating missing persons and enhancing public transport. As the world’s first private 5G AI-RAN deployment of its kind, the investment is poised to become the backbone of emerging technologies for community well-being in the area, as well as to facilitate prospective economic opportunities across verticals.
Although myriad 5G-backed AI use cases have been identified and explored, the network operations perspective may prove equally fruitful for enterprises looking to integrate AI-RAN into their technology stacks.
AI-RAN for Private Networks Promises Immense Operational Efficiency Gains
IMPACT
AI-RAN has become a much-hyped topic in recent months, with the introduction of the specialized AI-RAN Alliance featuring industry titans such as NVIDIA, Ericsson, Nokia, and Amazon Web Services (AWS), among others. The alliance has outlined three sub-groups within this space, namely:
- AI-and-RAN: Involves converging the infrastructure of the two technologies to create new use cases
- AI-on-RAN: Looks at AI use cases and how new opportunities may arise through RAN-based enablement
- AI-for-RAN: Indicates how AI technology will enhance the operation of RAN in networking—the main focus of this ABI Insight
Machine Learning (ML) for Self-Organizing Networks (SONs) has been around for some time; however, the virtualized RAN software stack coupled with NVIDIA Central Processing Unit (CPU) + Graphics Processing Unit (GPU) architecture, as implemented in Las Vegas, enables more demanding use cases and more direct efficiency gains than earlier SON approaches.
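To make the SON concept concrete, below is a minimal sketch of closed-loop, SON-style parameter tuning, in which a cell's handover margin is nudged each control cycle based on its observed drop rate. The cell names, Key Performance Indicator (KPI) fields, target value, and tuning rule are illustrative assumptions, not a description of the Las Vegas deployment or any vendor's API; in a full ML-driven SON, a trained model would replace the simple rule shown here.

```python
# Hypothetical sketch of closed-loop SON-style tuning: adjust each cell's
# handover margin based on its observed handover drop rate. Cell names,
# KPI values, and the tuning rule are illustrative only.
from dataclasses import dataclass


@dataclass
class CellKPIs:
    cell_id: str
    handover_drop_rate: float   # fraction of handovers that fail
    handover_margin_db: float   # current handover margin (dB)


TARGET_DROP_RATE = 0.02         # assumed optimization target
STEP_DB = 0.5                   # assumed adjustment step per control cycle


def tune_handover_margin(cell: CellKPIs) -> float:
    """Return an updated handover margin for one control cycle."""
    if cell.handover_drop_rate > TARGET_DROP_RATE:
        # Too many failed handovers: hold on to users slightly longer.
        return cell.handover_margin_db + STEP_DB
    # Comfortably below target: relax the margin to hand over earlier.
    return max(0.0, cell.handover_margin_db - STEP_DB)


if __name__ == "__main__":
    cells = [
        CellKPIs("cell-a", handover_drop_rate=0.05, handover_margin_db=3.0),
        CellKPIs("cell-b", handover_drop_rate=0.01, handover_margin_db=3.0),
    ]
    for cell in cells:
        new_margin = tune_handover_margin(cell)
        print(f"{cell.cell_id}: margin {cell.handover_margin_db} dB -> {new_margin} dB")
```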
Implementing AI-RAN in private networks brings about several advantages:
- Cost Savings through Scalability: RAN is traditionally capital-intensive, with some estimates suggesting that it accounts for up to 60% of the mobile industry’s Capital Expenditure (CAPEX). This cost burden carries over to private deployments; because AI-RAN softwarizes functions that previously required dedicated hardware, a private 5G deployment needs fewer hardware components, lowering overall Total Cost of Ownership (TCO).
- Dynamic Resource Allocation: By offloading RAN processing tasks to GPUs and Data Processing Units (DPUs), hardware traditionally associated with AI workloads, enterprises can achieve faster processing times. Firms often deploy private networks to control network performance and speed, and this development means they can reap further benefits from the already (relatively) swift connectivity the technology provides (a simplified illustration follows this list).
- Reducing Complexity: As observed in the burgeoning Artificial Intelligence Operations (AIOps) industry, automation makes previously complex tasks user-friendly. This extends to networking, in which numerous customization parameters must otherwise be manually monitored and implemented. Further, AI-RAN will allow for easier deployment of use case-specific AI software. Combined with the scalability noted earlier, companies stand to exploit AI-RAN-enabled networks as an evolutionary connectivity tool, adopting new technology more smoothly and quickly as it is released.
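As a rough illustration of the dynamic resource allocation point above, the sketch below splits a fixed pool of GPU capacity between RAN processing and AI inference each scheduling interval, always satisfying the RAN first and handing spare capacity to inference. The workload names, capacity units, and priority rule are hypothetical assumptions for illustration, not NVIDIA's or Vapor IO's actual scheduler.

```python
# Hypothetical sketch of RAN-first GPU sharing: reserve what the RAN needs
# each scheduling interval, then give leftover capacity to AI inference.
# Capacities and demand figures are illustrative only.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    demand: float  # GPU capacity requested this interval (arbitrary units)


def allocate(gpu_capacity: float, ran: Workload, inference: Workload) -> dict:
    """Allocate GPU capacity for one interval, satisfying RAN demand first."""
    ran_share = min(ran.demand, gpu_capacity)
    spare = gpu_capacity - ran_share
    inference_share = min(inference.demand, spare)
    return {
        ran.name: ran_share,
        inference.name: inference_share,
        "idle": spare - inference_share,
    }


if __name__ == "__main__":
    # Quiet radio period: most capacity flows to inference.
    print(allocate(100.0, Workload("ran", 30.0), Workload("inference", 80.0)))
    # Busy radio period: inference is throttled to protect the RAN.
    print(allocate(100.0, Workload("ran", 90.0), Workload("inference", 80.0)))
```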
When employed in a private networking context, AI-RAN has the potential to unleash a multitude of new use cases and operational efficiencies that were not previously possible with AI-led software alone. However, considering the novelty of the technology, there are understandably a variety of challenges and roadblocks that must be faced by vendors and enterprises alike before they can utilize AI-RAN to its full potential.
Full-Stack, Ecosystem-Wide Partnerships Are Essential in Bringing AI-RAN for Private Networks into the Mainstream
RECOMMENDATIONS
A few challenges must be considered when pursuing AI-RAN deployment for private networks:
- Data Availability: Unlike public networks, which benefit from massive amounts of user-generated and proprietary data logs, private networks require data specific to the niche the enterprise occupies. The same challenges and complexities firms face when training LLMs for knowledge management and automation will manifest during the deployment of AI-RAN.
- Skills Shortage: A substantial skills gap exists in this field, and training employees in-house is rarely financially viable. Without dedicated staff, it is difficult to work out how AI-RAN would fit into operations and how to phase out old practices for new, technology-heavy ones. Restructuring operational workflows will take not only time, but also extensive human capital and effort.
- Implementation Costs: Although AI-RAN promises savings on hardware, deploying it in private networks requires more specialized network design and professional services than more traditional connectivity options.
Although these barriers may seem significant, there are several ways that vendors can attract and support potential clients:
- Vendors must forge partnerships with other players and AI-RAN enablers in the ecosystem to demystify the technology for customers. AI-RAN is a new technology, and clients lack an understanding of the solution; they will not know what to expect or where to start deploying it, making packaged solutions with preferred partners a more favorable option. AI-RAN also provides a new pathway for traditional AI and Generative AI (Gen AI) players, namely hyperscalers (with GPU-as-a-Service (GPUaaS) offerings) and AIOps firms (with LLM monitoring), to enter the previously closed telecommunications industry. As many firms are new to collecting large amounts of quality data, or may have silos throughout the organization, vendors should aim for synergies with companies such as Qlik (data management for greater accessibility) or Hazy (synthetic data to bulk up training data).
- Consulting, training, and development specialists must be included as part of the AI-RAN service package, along with middleware development to support the transition from legacy systems. Change management can be costly for firms to outsource, which is why vendors must ensure that the move from legacy systems to AI-RAN is smooth, through the provision of human capital and standardized training programs.
- AI-RAN must be offered in as-a-Service packages. The virtualization of the system is conducive to this business model, as capacity can scale easily to meet enterprise demand. Although this may increase economic risk for vendors in the short term, recurring payments through subscription offerings will attract clients due to affordability, while also creating long-term captive opportunities for vendors.