Will the real edge computing please stand up!

The edge computing trend that is pushing the cloud into the fog has kicked off in earnest, but there are few true embodiments of this vision — so what is going on?


In its simplest form, edge computing means putting an extra layer of computing in the network between the end device and a centralized data center, the latter commonly said to be located in the cloud. The “edge” moniker implies that this extra layer of computing sits as close as possible to the end device.

The primary reasons for deploying edge computing are to significantly reduce the network processing delay for time-critical applications, and to greatly reduce the amount of data that needs to be carried further upstream into the network.
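The second of those reasons can be made concrete with a minimal sketch: rather than forwarding every raw sensor reading to a distant data center, an edge node can collapse a window of readings into a compact summary before sending anything upstream. All names and the record format below are illustrative, not taken from any particular product.

```python
# Hypothetical edge node: aggregate a window of raw sensor readings into
# one small summary message, so only the summary travels upstream.

def summarize_window(readings):
    """Collapse a window of raw readings into a single upstream message."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# 1,000 raw readings become a single four-field summary sent upstream.
raw = [20.0 + (i % 7) * 0.1 for i in range(1000)]
summary = summarize_window(raw)
```

Here a thousand data points shrink to four fields before leaving the edge, which is the essence of the upstream-data-reduction argument.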

How it all started

The idea of having computing at the edge of the network, such as a gateway in a remote enterprise office, has been around for a long time. However, these deployments have typically served very specific use cases in comparatively low volumes. Only more recently, driven by new use cases and enabling technologies, has edge computing taken on greater significance and added value in the networking world.

The initial driver for edge computing has been the steadily increasing number of machine-to-machine (M2M) deployments. For example, industrial M2M deployments often require fast control decisions in the network based on a large amount of data coming from many different M2M sensors. Edge computing is ideal for these cases.
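As a toy illustration of such a fast local control decision (not any specific industrial system; the sensor names and limits are invented), an edge controller can evaluate many sensor inputs against safety limits without a round trip to a distant cloud:

```python
# Illustrative edge controller: decide locally, from many sensor inputs,
# whether a machine should keep running, avoiding cloud round-trip delay.

def control_decision(temps_c, vibration_mm_s, temp_limit=85.0, vib_limit=7.1):
    """Return 'shutdown' if any sensor breaches its safety limit, else 'run'."""
    if max(temps_c) > temp_limit or max(vibration_mm_s) > vib_limit:
        return "shutdown"
    return "run"

print(control_decision([70.2, 83.9, 71.5], [2.0, 3.4]))  # all within limits
print(control_decision([70.2, 86.1, 71.5], [2.0, 3.4]))  # over-temperature
```

Because the decision runs at the edge, its latency is bounded by local processing rather than by the network path to a centralized data center.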

The leading industry alliance to address this M2M use case is the OpenFog Consortium that was formed in 2015. The consortium has been spearheaded by Cisco and cleverly includes a meteorological term (fog) in its name to reflect that it is an evolution of the other famous meteorological name-bearing computing approach, cloud computing.

Name play aside, there is a specific technological relationship between edge computing technologies — like those from the OpenFog Consortium — and cloud computing, and that is in the use of network softwarization technologies like NFV and SDN along with commodity computing hardware. This combination of technologies enables fast and low-cost deployments and greatly increases the attractiveness of edge computing to address more use cases.

The final piece of the puzzle is being provided by rapidly advancing big data analytic software packages that allow efficient parsing and analyzing of large amounts of data generated by M2M and other deployments.
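The parsing-and-filtering step such analytics packages perform can be sketched in a few lines; the newline-delimited `sensor_id,value` record format here is invented purely for illustration:

```python
# Toy edge-analytics step: parse raw sensor records and keep only the
# anomalous ones for upstream reporting. Format and threshold are invented.

def parse_record(line):
    sensor_id, value = line.split(",")
    return sensor_id, float(value)

def anomalies(lines, threshold=100.0):
    records = (parse_record(line) for line in lines)
    return [(sid, val) for sid, val in records if val > threshold]

feed = ["s1,42.0", "s2,101.5", "s3,99.9", "s4,130.2"]
flagged = anomalies(feed)  # only s2 and s4 exceed the threshold
```

Real deployments would of course use purpose-built streaming-analytics frameworks rather than hand-rolled parsing, but the filtering principle is the same.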

5G kicks it up

In the mobile world, edge computing has been identified as a critical part of the still-developing 5G architecture to handle a wide range of use cases, from location-dependent services to intelligent video analytics to augmented reality content delivery.

The key standardization forum for 5G edge computing is the ETSI Mobile Edge Computing (MEC) group. The MEC group’s focus is on enabling third-party applications to be hosted in the 5G mobile network edge. This is expected to spur innovation at the edge of mobile networks, which has in the past been the exclusive domain of radio network equipment providers.

As in the previous M2M case, the MEC approach is being enabled by a combination of network softwarization technologies, low-cost computing hardware and big data software advances.

It is interesting to note that active members of ETSI MEC include traditional telecom companies like Nokia and Huawei as well as more computer-centric companies like IBM and Intel, which in the past typically did not actively participate in mobile network standardization forums.

Where is it all heading?

Edge computing has made significant headway in the past few years, as sketched out above. However, my belief is that it is simply not going far enough. It sometimes feels like everybody is jumping on the edge computing bandwagon with some product that a few years ago would have been called a deep packet inspection solution or a novel traffic-shaping device. This is not really edge computing, in my opinion.

This really is just opportunism that is confusing the space. However, I am seeing some very interesting developments happening in the European research community that suggest to me that the real 5G edge computing will soon be standing up.

Specifically, I see a trend to have what I call “full-stack relocations” to the edge in addition to the existing OpenFog and ETSI MEC-type deployments. By that I mean that while existing OpenFog and ETSI MEC-type deployments do specific — and very useful — processing at the edge, they still require some final processing to be done at the centralized cloud data centers.

In the full protocol stack relocation approach, the research community envisions dynamically moving all required processing to the edge, potentially eliminating, or at the very least radically distributing, the traditional cloud model for many use cases. I believe that when this research graduates into practice, we will be looking at a wholly different model where quite literally the whole world can be your cloud, whenever and wherever you need it.

This article is published as part of the IDG Contributor Network.
