Edge Computing Unlocks the Networks of the Future

Capgemini Engineering’s Multi-Access Edge Compute solution helps network providers unlock new potential for the next generation of networking applications.

Edge computing brings hyper-scale computing close to end devices such as virtual-reality headsets and autonomous vehicles. The closer to the device the computing takes place, the lower the latency: the experience becomes more lifelike for gamers, and self-driving cars can respond to their environment faster.

Edge compute is made possible by two things: ultra-reliable, low-latency connectivity between devices, and the build-out of the network edge, the infrastructure located on the periphery of the centralized network. Edge compute is expected to become a reality as the world shifts to 5G, because 5G rollouts will also deploy scalable compute platforms at the edge of the network.

Cost-effective, scalable compute platforms are typically associated with cloud computing. However, some telecommunications operators, industrial equipment manufacturers, and smart-energy providers have not yet transitioned to the cloud. Some of these companies have adopted network function virtualization (NFV), software-defined networking (SDN), and infrastructure and platform as a service (IaaS and PaaS), but edge computing is a more complex digital delivery model because third-party applications need to run inside the enterprise’s own network.

The problem is that many third-party developers prefer cloud-based compute for their server applications. They use the latest cloud-native technologies and deliver client applications through app stores. The challenge for enterprises is to build platforms that accommodate these developers so they can easily transition their application development from the cloud to dedicated enterprise networks.

Another challenge is that a large amount of data must be stored at the network edge. The platforms running at the edge need to handle streaming analytics and manage potentially hundreds of thousands of devices. At the same time, they need to accommodate new use cases that arise almost daily.
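To make this concrete, the sketch below shows one way an edge node might consume device telemetry from a Kafka topic and keep simple per-device state locally. It is a minimal illustration only, assuming the open-source kafka-python client; the broker address, topic name, and telemetry fields are hypothetical.

```python
# Minimal sketch of an edge streaming-analytics worker.
# Assumes the kafka-python package and a reachable broker; the topic name,
# broker address, and telemetry fields are illustrative assumptions.
import json
from collections import defaultdict

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "device-telemetry",                    # hypothetical topic name
    bootstrap_servers="edge-broker:9092",  # hypothetical edge broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

readings_per_device = defaultdict(int)

for message in consumer:
    event = message.value  # e.g. {"device_id": "cam-042", "temp_c": 41.7}
    readings_per_device[event["device_id"]] += 1

    # Flag out-of-range readings locally at the edge instead of shipping
    # every raw event back to a central cloud.
    if event.get("temp_c", 0) > 40:
        print(f"ALERT {event['device_id']}: {event['temp_c']} C")
```

In a real deployment the same pattern could feed a streaming framework such as Spark, with only aggregated results forwarded to the central cloud.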

Such deployments require mature ecosystems that developers are comfortable with and that continue to evolve. A top priority for any ecosystem is to ensure that platforms are easy to build, install, manage, operate, and monetize. Open source answers both requirements.

Container technologies such as Docker and orchestration platforms such as Kubernetes are driving service portability across platforms. They are widely used, enjoy a large and growing base of users and contributors, and are preferred by developers. Software-defined storage such as CEPH, streaming-analytics frameworks such as Kafka and Spark, and AI frameworks such as TensorFlow are driving application and platform development in edge computing. Cloud Native Computing Foundation (CNCF) projects such as Prometheus, Envoy, Linkerd, gRPC, and Fluentd, along with service-mesh architectures, are critical for standardizing platforms and for interoperability between operators and enterprises.
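As a small illustration of how these building blocks plug into an edge service, the sketch below exposes basic Prometheus metrics from a Python process using the open-source prometheus_client library. The metric names, port, and simulated workload are assumptions made for illustration rather than part of any specific platform.

```python
# Minimal sketch: exposing Prometheus metrics from an edge service.
# Uses the prometheus_client package; metric names, the port, and the
# simulated workload are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("edge_requests_total", "Requests handled at the edge node")
LATENCY_MS = Gauge("edge_request_latency_ms", "Latency of the last request in ms")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://<node>:8000/metrics
    while True:
        # Stand-in for real request handling on the edge node.
        LATENCY_MS.set(random.uniform(1, 20))
        REQUESTS.inc()
        time.sleep(1)
```

A Prometheus server running at the edge or in a central cloud can then scrape these metrics to drive dashboards and alerting.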

At the same time, OpenStack has matured and is being adopted by many enterprises, particularly the major telecommunications providers, and the StarlingX project aims to bring this community closer to the edge. However, scale and complexity remain tough challenges, and NFV/SDN and telecommunications-centric orchestration systems are not exactly favorites among edge application developers who are used to cloud platforms where an application can be up and running in a few clicks.

Most developers don’t understand the complexity of carrier networks, so open-source orchestration projects such as ONAP may have to consider lightweight, developer-centric orchestration and marketplace platforms. ONAP also remains important at the network edge for managing highly available virtual network functions (VNFs). Security projects are still evolving but are likely to develop rapidly, as they provide critical support for these platforms.

Open-source projects drive interoperability for the simple reason that no application developer wants to build dozens of different versions of their application to suit the dozens of operators and enterprises that all have unique platforms. As these companies adopt common IaaS, PaaS, and orchestration architectures, costs will come down and developers will be able to deliver innovative solutions more efficiently.

For over 20 years, open source has been a constant driver of innovation. But new domains bring new and competing projects that can create silos. Consolidation in key areas such as edge infrastructure software and orchestration is necessary, yet communities often don’t work well when options are constrained. On the other hand, spawning too many projects can stretch the domain too far, and the community loses focus. It is therefore essential to find the middle ground and keep the focus on application developers and their contributions, along with the three pillars of datacenter technologies: compute, network, and storage.


To find out more about our edge compute capabilities, download the brochure:


Shamik Mishra, Vice President
