Edge Computing and The Battle for Market Share
06 Jan 2022
IN-6394
Inconsistency at the Edge Breeds Confusion
NEWS
“The Edge” was one of the most written-about technology areas in 2021, but inconsistency in the terminology used to describe edge technology often leads to confusion. Whether it be “Public Edge”, “Co-Located Edge”, or “Private Edge”, the inconsistency is largely a result of the perspective from which an edge solution is viewed. From the enterprise perspective, a co-located edge solution is a cage or square footage in a neutrally hosted data center. An example would be a remote server on the enterprise network located at a stock exchange facility, executing algorithmic trades as highly sensitive thresholds are met, where milliseconds count. To a telecommunications operator, however, a co-located edge solution is one where the operator partners with a third party, such as a hyperscaler, to deploy that party’s technology at a telecom site, cell tower, or data center. A typical example is Amazon Web Services (AWS) Wavelength. And here the confusion is well illustrated, because to the enterprise, AWS Wavelength is a public edge solution.
Perhaps a more measured approach that looks at where and how edge solutions are deployed will avoid this confusion. There are three categories to consider using this approach: “Carrier Deployed”, “Third-Party Neutral Host or Hyperscaler Deployed”, and “On-Premises Deployed”. This area of technology is still emerging, and the full extent of what edge computing will enable is not yet understood; this is perhaps why the roles of the different technology companies operating in this space are not yet defined. Some examples of the different deployment methods are discussed below, along with their associated pros and cons.
Close to The Edge – Edge Deployment Types
IMPACT
Perhaps the simplest deployment method, or at least the one with the fewest grey areas, is the “On-Premises” edge deployment, where computing equipment is placed in an enterprise-operated environment outside of the traditional data center. This equipment may be a ruggedized server supplied by a hardware Original Equipment Manufacturer (OEM), such as the Dell XR12, or a hyperscaler product that extends the hyperscaler environment into the enterprise, such as AWS Outposts. The enterprise would typically deploy this hardware in an operating environment where it can quickly and efficiently process input data and act on the results without needing to relay the data to the data center or cloud for analysis. An example would be a server that performs quality assurance analysis on a video stream from a machine that produces thousands of components a minute. The delay introduced by sending the video feed to the cloud for analysis and waiting for the results to be returned could mean that large volumes of defective products are produced before a fault is corrected. The cost of sending the video feed to the cloud would also be considerable.
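The quality assurance example above can be sketched as simple back-of-envelope arithmetic. All the figures below are illustrative assumptions, not measured values from any real deployment:

```python
# Back-of-envelope sketch of the defect-exposure argument: how many parts
# are produced between a fault occurring and the line reacting to it?
# All inputs are hypothetical, chosen only to illustrate the trade-off.

def defective_parts(detection_latency_s: float, parts_per_minute: int) -> float:
    """Parts produced while a fault goes undetected."""
    return parts_per_minute * detection_latency_s / 60.0

PARTS_PER_MINUTE = 2_000   # "thousands of components a minute"
CLOUD_ROUND_TRIP_S = 2.5   # assumed upload + cloud analysis + response time
ON_PREM_LATENCY_S = 0.05   # assumed local inference on an edge server

cloud_exposure = defective_parts(CLOUD_ROUND_TRIP_S, PARTS_PER_MINUTE)
edge_exposure = defective_parts(ON_PREM_LATENCY_S, PARTS_PER_MINUTE)

print(f"Cloud path:   ~{cloud_exposure:.0f} defective parts per fault")  # ~83
print(f"On-prem path: ~{edge_exposure:.1f} defective parts per fault")   # ~1.7
```

Even with generous assumptions for the cloud round trip, the gap of two orders of magnitude is what drives this class of workload on-premises.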
Typically, an on-premises deployment will be more expensive to deploy and maintain, but it affords the enterprise the ability to deploy a solution that does exactly what it needs with no compromises. It is also the most secure deployment, with the enterprise having full control of every component in the solution.
The “Carrier Deployed” edge deployment, as the name suggests, is an edge solution deployed by a carrier. It may be proprietary technology, OEM-supplied hardware, or a hyperscaler-supplied extension of the hyperscaler’s environment, such as Azure Edge Zones. The important distinction is that the carrier is the driving force behind the deployment, having determined the need for the service and committed the resources to deploy and maintain it, in the same way it would decide to deploy a cell tower in a specific location. The use cases for this type of deployment are more general and still evolving; one example is a carrier deploying edge computing facilities at stadiums or arenas where events take place and the user experience can be enhanced in real time through Augmented Reality (AR).
This deployment, unless commissioned by an enterprise for its exclusive use, will typically be available to anyone who subscribes to the service. Because the service is shared and the deployment costs are borne by the carriers, the cost of using this service will be considerably lower than that of an on-premises deployment, and it will not require up-front investment from consumers, as the carriers will expect to recoup those costs over the operating life of the equipment. The shared nature of the service does mean that the end user has much less control over the environment, and the security of the data being processed will need extra consideration. Perhaps the biggest downside for the consumer is having to wait until the carriers decide that the economics of a deployment to a specific geographical area are worthwhile.
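The cost-recoup logic described above can be illustrated with a simple payback calculation. The figures below are hypothetical placeholders; real carrier economics involve many more variables (utilization ramp, spectrum, backhaul, depreciation schedules):

```python
# Hedged sketch of the carrier cost-recovery trade-off: how long until
# cumulative margin from subscriptions covers the up-front deployment cost?
# All inputs are invented for illustration only.

def payback_years(capex: float, annual_revenue: float, annual_opex: float) -> float:
    """Years until cumulative margin covers the up-front deployment cost."""
    margin = annual_revenue - annual_opex
    if margin <= 0:
        return float("inf")  # the service never pays itself back
    return capex / margin

# Assumed figures for a single metro-area edge site
site_payback = payback_years(capex=5_000_000,
                             annual_revenue=1_500_000,
                             annual_opex=500_000)
print(f"Payback: {site_payback:.1f} years")  # 5.0 years
```

A payback period shorter than the equipment’s operating life is what makes a given geographical area “economically worthwhile” in the sense used above.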
“Neutral Host and Hyperscaler Deployed” edge solutions are commercial edge deployments driven by third parties or hyperscalers. These deployments may involve a degree of collaboration between multiple entities, including carriers, but the commercial initiative and drive come from the neutral host or hyperscaler; an example of this type of deployment is AWS Local Zones.
This type of deployment enjoys the same benefits as carrier-deployed solutions, but is likely to involve a larger investment and have a larger footprint, bringing more compute power to specific geographic areas and increasing the size of the workloads that can be performed. The downsides are also very similar to those of carrier deployments, the main one being that to benefit from this type of deployment, the equipment must be located in an area where it has been deemed economically viable to install such a facility. To put this into perspective, in North America it is expected that by 2024 there will be eighteen AWS Local Zones.
The Sharpest Edge – Which Deployment Method Will Dominate?
RECOMMENDATIONS
The Edge as a technology space is still evolving, and it is difficult to say which deployment method will be dominant in the long term. At this phase of the technology rollout, we are seeing deployments driven by critical use cases, where the benefits outweigh the cost of implementation. Examples include systems that prevent costly events, such as unplanned factory downtime or the manufacture of defective parts, and deployments that reduce risk to human life. This means that a large proportion of current deployments are on-premises.
As edge workloads become more general and better understood, we can expect to see more publicly exploitable deployments. Use cases will drive the adoption of edge computing and will therefore strongly influence which deployment method dominates and generates the highest revenues.
Right now, the hyperscalers have the resources, drive, and technical acumen to deploy their own edge solutions in geographically ripe locations: the low-hanging fruit. But in the long-term picture for edge computing, with edge servers available to serve large percentages of the world’s population, the carriers and tower companies hold the real-estate advantage, thanks to their existing cell coverage of those same populations.
The big question is whether the carriers have the appetite to self-fund edge deployments while the use cases are not fully understood and are not yet generating enough revenue to recoup large-scale deployment costs. Edge computing will generate significant revenue, so if the carriers delay too long, the hyperscalers may have already harvested the low-hanging fruit that could help fund wider deployment. If they go too early, the deployment costs will take much longer to recoup, and the profitability of these services will suffer. As with many things in technology, timing will be key. What the carriers have in their favor is experience in rolling out and operating multiple generations of cellular hardware; drawing on lessons learned in that space may well place them in the best position to benefit from the opportunity that edge computing presents.