“Where is the edge?” is one of the most common questions I hear at telco industry events. The answer often comes as “the edge is where it needs to be.” Some take this to mean a wide distribution of computing hardware that will make up the telco edge cloud. Here, I will argue why a wide distribution of computing is unlikely to happen in telco networks. The reasons are many; rather than go through them all, I will address one factor only. For context, this applies to off-premise, or data-center, edge computing services for consumers and enterprises. See my previous article addressing the on-premise edge computing opportunity.
The Telco Edge Cloud
One of the arguments for edge computing in telco networks is that telcos are closest to end customers. Telcos also own infrastructure that could be turned into data centers. These data centers would become the backbone of the edge cloud, through which telcos can offer differentiated services built on low-latency, low-jitter performance. CORD and M-CORD are examples of open source projects aiming to develop blueprints for the telco cloud.
How many data centers will the service provider need to build? I would argue not too many. The idea that we will have tens of thousands of small data centers in telco networks is not realistic for the time being.
Data Center Economics
Data centers have an optimal operating point that minimizes their cost, known as the design density. The computing infrastructure is optimal when it balances power and cooling with space requirements. Operating costs are at their worst when the computing infrastructure runs well below its provisioned capacity: ample power and cooling is available but left stranded. On the other hand, should the computing infrastructure operate at peak capacity, space becomes the limiting factor while utilization of power and cooling is maximized.
The cost of power and cooling per computing unit is more severe than that of space. Therefore, it is better to deploy computing infrastructure above the data center's design density than below it.
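To make this concrete, here is a toy cost model of my own (the figures and the cost structure are assumptions for illustration, not from the article): power and cooling are provisioned up front at a fixed cost, while space cost scales with the number of compute units deployed. The cost per deployed unit then falls as deployment approaches the provisioned capacity, which is why running below design density is the expensive case.

```python
# Toy model: a site's power/cooling plant is a fixed, up-front cost sized
# for CAPACITY compute units; space adds a small incremental cost per unit.
# All numbers are hypothetical, chosen only to illustrate the shape of the
# cost curve.
POWER_COOLING_COST = 1000.0  # fixed cost of provisioned power/cooling (assumed)
SPACE_COST_PER_UNIT = 2.0    # incremental space cost per deployed unit (assumed)
CAPACITY = 100               # compute units the power/cooling can support

def cost_per_unit(deployed_units: int) -> float:
    """Total site cost spread over the units actually deployed."""
    total = POWER_COOLING_COST + SPACE_COST_PER_UNIT * deployed_units
    return total / deployed_units

for u in (10, 50, CAPACITY):
    print(f"{u:3d} units deployed -> {cost_per_unit(u):.1f} per unit")
# At 10 units the stranded power/cooling dominates (102.0 per unit);
# at full capacity the same fixed spend amortizes to 12.0 per unit.
```

The sketch captures the article's point: the fixed power/cooling spend is stranded when utilization is low, so per-unit economics punish underfilled sites far more than space constraints punish full ones.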
Workload Variability
Data centers experience variable demand on computing resources over time, and customers provision their computing requirements with this variability in mind. From a statistical perspective, large data centers, with many users and different types of workloads, are better able to average out demand variability than small data centers. A small data center running a few workloads for a few users will experience periods of high demand on computing power followed by periods of low demand.
As a result, small data centers need to provision computing capacity further above their average expected workload than large data centers do. This reduces the efficiency of data center operations and increases cost.
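The statistical-multiplexing effect behind this can be sketched with a short calculation (my own illustration; the per-workload mean, standard deviation, and headroom factor are assumed values). If each workload's demand is independent with mean MU and standard deviation SIGMA, a pool of N workloads has aggregate mean N·MU but standard deviation only √N·SIGMA, so the headroom needed relative to mean demand shrinks as the pool grows.

```python
import math

# Hypothetical per-workload demand statistics (arbitrary units) and the
# number of standard deviations of headroom we choose to provision.
MU, SIGMA, K = 100.0, 30.0, 3.0

def overprovision_factor(n_workloads: int) -> float:
    """Capacity needed (mean + K standard deviations) divided by mean demand."""
    mean = n_workloads * MU
    std = math.sqrt(n_workloads) * SIGMA
    return (mean + K * std) / mean

for n in (1, 10, 100, 1000):
    print(f"{n:5d} workloads -> provision {overprovision_factor(n):.2f}x mean demand")
# A single workload needs 1.90x its mean demand in capacity; a pool of
# 1000 needs only about 1.03x, because independent peaks rarely coincide.
```

This is the quantitative core of the argument: the small edge site is the `n = 1` case, forced to carry large idle headroom, while the large centralized site sits near the bottom of the curve.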
To illustrate this point with an example, consider edge computing resources catering to a gaming application. The telco would need to reserve computing capacity across its service area to meet the performance SLA committed to its customers. Outside of peak load, much of that reserved capacity sits idle, and a small data center has little ability to share the unused capacity with other workloads.
A Question of Centralization
Given that distributing computing resources increases cost, the question becomes: how much can telcos centralize the edge while still meeting performance requirements and service economics? Our work at Xona Partners shows that the answer lies in the potential applications (their performance requirements and monetization potential) as well as the architecture of the telco network. Moreover, existing cloud software stacks are well optimized for the centralized cloud, not necessarily for a distributed edge cloud. As a result, we believe that, in the foreseeable future, the telco edge cloud will be ‘more far than near’!