Verizon 2019: 15 ms Edge with Fortnite, Red Dead Redemption, & gaming service

“Fortnite, Red Dead Redemption 2, God of War, Battlefield V, and Destiny 2 are all running on the Nvidia Shield. … Our library will consist of most or all of the top games,” Chris Welch at The Verge reports. Verizon’s Netflix-like gaming service is moving into beta.

Adam Koeppe, senior VP for network planning, told Mike Dano, “The latency speeds obtained through the company’s edge computing test in Houston were 15 ms.” That’s more than the 10 ms Koeppe expected in 2016 or the 5 ms CEO Vestberg discussed in January, but it is about the best practical today. The 5G air latency alone is about 10 ms.

“We’ll launch the first mobile edge compute platform later this year.

… we think revenue there starts in 2021 and builds from there. … turning on the first markets in the not-too-distant future,” CFO Matt Ellis said at Morgan Stanley.

“The Intelligent Edge Network is a fundamental part of our strategy,” says CEO Hans Vestberg.

Chief Strategy Officer Rima Qureshi adds, “Everything is going to the Edge. More and more, you want to get closer to where the application is.”

Kyle Malady, CTO, provided more details. “So now a couple of things that are important in the Intelligent Edge Network that we’re actually deploying today and you’ll be hearing more about as we go through 2019 and 2020. First is our CRAN architecture. This stands for centralized RAN. And really it’s a design that centralizes the cell-site controllers and baseband units into common hubs. And we call those CRAN hubs, as you would imagine. And really what those do is, those hubs connect to small cells and our radios out in the network.

“Really all this is, this is a simple concept, frankly. It’s just taking the compute that happens in the public cloud and moving it closer and closer to the Edge, to the point that you have the compute sitting in the CRAN hubs that we talked about earlier. Now why is that important? Because it really unleashes the currencies that Hans spoke about. And we’ve done a lot of testing, proof of concepts, with a lot of software providers and application providers on this concept. And the performance that you can get by doing your application on this kind of network is incredible. It’s going to open up a whole new world of applications that we haven’t even dreamed of yet. So we’re really excited by this. We will have our first implementations of it at the end of the year and more to come there.”

Verizon is rebuilding transport and core in the OneFiber initiative. That’s working well for them; Lee Hicks says they are saving 50% in the first year. Even without building an Edge Network, the rebuild lowers latency dramatically.

It would be natural for them to be installing Edge Compute at the same time, but nothing is announced.

Here’s the press release:

Verizon successfully tests edge computing on a live 5G network, cutting latency in half

Test paves the way for next generation network experiences, including wireless AR and VR on the Intelligent Edge Network

January 31, 2019 12:10 ET Source: Verizon Communications

HOUSTON, Jan. 31, 2019 (GLOBE NEWSWIRE) — Verizon engineers have successfully tested edge computing – putting compute power closer to the user at a network’s edge – on a live 5G network, cutting latency in half. Low latency – the time it takes for information to make a roundtrip – is important today for applications like online gaming and video streaming, and will be increasingly vital as next generation wireless experiences emerge.

In a newly formed 5G test bed in Houston, Verizon engineers installed Multi-access Edge Compute (MEC) equipment and MEC platform software into a network facility closer to the network edge, thus decreasing the distance that information needs to travel between a wireless device and the compute infrastructure with which that device’s apps are interacting. 

In this test, the engineers used an Artificial Intelligence (AI)-enabled facial recognition application to identify people. Using MEC equipment located in the network facility, the application was able to analyze information right at the edge of the network where the application was being used (instead of traversing multiple hops to the nearest centralized data center). As a result, the engineers were able to identify the individual twice as fast as when they duplicated the experiment using the centralized data center. Putting the compute power closer to the user at the network edge greatly decreased the time to deliver the experience – a key benefit of the Verizon Intelligent Edge Network.

“For applications requiring low latency, sending huge quantities of data to and from the centralized cloud is no longer practical. Data processing and management will need to take place much closer to the user. MEC moves application processing, storage, and management to the Radio Access Network’s edge to deliver the desired low latency experiences, thereby enabling new disruptive technologies,” said Adam Koeppe, Verizon’s Senior Vice President for Network Planning. “This shift in where the application processing occurs, the inherent capabilities of 5G to move data more efficiently, and our use of millimeter wave spectrum is a game-changer when it comes to the edge computing capabilities we can provide.” 

Why latency is important for next generation networks
As 5G rolls out, we will see a rise in wireless applications that are heavily dependent on low latency. Consider, for example, vivid, immersive Virtual Reality (VR). This requires accurate syncing of video playback with the physical movements of the user. Any lag, even a small one, can lead to perceptible differences between what a user sees and what they experience, which is why some people get dizziness or nausea when using VR. If information for the headset is travelling over a network, you need super low latency to ensure there is no lag (wait time). In a future where cutting-edge innovations like self-driving cars and remote-controlled robotics are envisioned, having near-zero latency is even more critical. Hosting events at venues, industrial automation, retail, gaming, and video analytics with AI will all benefit from MEC technology.

What is edge computing and MEC?
As cloud computing has taken hold, there has been a trend to use large, centralized data centers where massive amounts of computing and storage take place. In the era of 4G, this approach enabled app development and innovations that significantly altered the way we use mobile devices. However, this approach has some drawbacks in how data flows. When someone wants to stream video, for example, their commands may have to travel several states away to reach the requested storage or computing, and then the data travels that same distance back. Most of the time the distance is not really noticeable, adding up to mere milliseconds, but in a future where near-zero latency is needed, every millisecond matters.
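
The distance penalty can be estimated from first principles: light in fiber propagates at roughly two-thirds the speed of light in vacuum, about 200 km per millisecond, so round-trip delay grows by about 1 ms for every 100 km of one-way fiber, before any routing or processing overhead. A back-of-the-envelope sketch (the distances are illustrative, not Verizon's figures):

```python
# Round-trip propagation delay over fiber, ignoring routing and
# processing overhead. Light in fiber travels at roughly 2/3 of c,
# i.e. about 200,000 km/s, or 200 km per millisecond.
SPEED_IN_FIBER_KM_PER_MS = 200.0

def round_trip_ms(one_way_km: float) -> float:
    """Round-trip fiber propagation delay in milliseconds."""
    return 2 * one_way_km / SPEED_IN_FIBER_KM_PER_MS

# A data center "several states away" vs. a metro-area edge hub:
for label, km in [("centralized DC, 1500 km", 1500), ("edge hub, 50 km", 50)]:
    print(f"{label}: {round_trip_ms(km):.1f} ms round trip")
# centralized DC, 1500 km: 15.0 ms round trip
# edge hub, 50 km: 0.5 ms round trip
```

Propagation alone makes the case: pulling compute from a distant region into a metro facility removes most of the fixed distance cost before any protocol improvements are counted.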

Lower latency is one of the numerous benefits of introducing MEC at the edge of the network, but it is not the only one. Increased reliability, energy efficiency, higher peak data rates, and the ability to process more data through more connected devices are also benefits of introducing MEC technology.

“To achieve near-zero latency, where data moves many times faster than the blink of an eye, having computing functions closer to the user is a vital step,” said Koeppe. “With this test, we have shown how much of an impact the move towards a MEC-based network architecture can make.”

How to manage multiple edges in the age of 5G

December 19, 2018 0

By Dave Andrews, Chief Architect

Today it no longer makes sense to talk about computing at a single “edge.” A modern network consists of multiple layers, each with its own compute capabilities and latency tradeoffs. At Verizon Digital Media Services, we’ve been managing these tradeoffs and interactions between the two innermost layers: very large but sparse public cloud data centers and the content delivery network (CDN). However, the rise of 5G and multi-access edge computing (MEC) creates amazing new capabilities and corresponding levels of additional complexity.


5G/MEC offers developers compute resources at the very edge of the cellular network, enabling applications to run with significantly reduced latency, and with vastly increased network throughput to clients. This allows developers to build applications that can ingest, process, and deliver large amounts of data closer to customers, avoiding today’s longer, slower paths over the internet. However, more places to run code means more potential problems – and more headaches for developers. It’s also unclear how the 5G/MEC layer will interact and cooperate with other existing compute layers. No one wants to build the same application three times, once for each layer, nor perform three deployments, or monitor three systems.

At the EdgeNext Summit in New York this past October, I presented our vision for solving some of these issues with a cohesive edge compute solution. The key is for the solution to hide all of the multi-edge complexity without sacrificing the power afforded us by multi-edge capabilities. Developers need to be able to have their applications move as needed between the 5G/MEC layer, CDN, and public cloud data center based on factors like cost and load.
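
As an illustration of what "move as needed between layers" might look like, here is a minimal placement policy in Python. The layer names, latency figures, costs, and load thresholds are all hypothetical, invented for this sketch; they are not part of EdgeControl or any Verizon API:

```python
# Hypothetical placement policy: pick the cheapest layer that still
# meets the application's latency budget and has spare capacity.
# Layers run from innermost (public cloud) to outermost (5G/MEC);
# all numbers below are made up for illustration.
LAYERS = [
    {"name": "public-cloud", "latency_ms": 40, "cost_per_hr": 1.0, "load": 0.60},
    {"name": "cdn",          "latency_ms": 15, "cost_per_hr": 3.0, "load": 0.50},
    {"name": "5g-mec",       "latency_ms": 5,  "cost_per_hr": 9.0, "load": 0.85},
]

def place(latency_budget_ms: float, max_load: float = 0.8) -> str:
    """Return the cheapest layer meeting the budget, skipping overloaded ones."""
    candidates = [l for l in LAYERS
                  if l["latency_ms"] <= latency_budget_ms and l["load"] < max_load]
    if not candidates:
        raise RuntimeError("no layer satisfies the latency budget")
    return min(candidates, key=lambda l: l["cost_per_hr"])["name"]

print(place(50))   # loose budget -> "public-cloud" (cheapest layer)
print(place(20))   # tighter budget -> "cdn" (MEC is over its load cap)
```

The point of the sketch is that placement is a policy decision, re-evaluated as cost and load change, rather than a one-time deployment choice per layer.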

To do this, we need a simple language to define the entire ecosystem. We need tooling to consume the expression of that language and manage the complexity at each of the three layers. Our EdgeControl specification is a step in the right direction, but it is currently focused exclusively on the CDN layer. In the spirit of one of the two hard things in computer science (naming things), this could be called EdgeCast Compute Control, or EC3 for short.

Once such a system is built, what would developers do with it? Here is how our cohesive solution would enable five core development tasks:

  1. Work in a development environment that’s representative of production. By ensuring a common framework and language support across all three layers, we can enable developers to continually test and run their code in whatever layer is cheapest. Often the inner layers will be cheapest for this purpose.
  2. Build tests to raise confidence in deployments. For testing, we can extend the concept of Edgecast’s Edge Verify to verify code correctness across layers. The resulting testing framework should have built-in performance analysis to further validate code at each layer, and it should also run the same suite of tests against canary deployments at each layer. Ensuring that code behaves identically at each different layer in the concentric circle is the primary requirement to enable failover and resilience.
  3. Built-in failover and resilience as use cases demand. This is a key new capability of our cohesive edge compute solution. Currently, failover and resilience are often afterthoughts in application architecture, because they can require tedious configuration of multiple components. This model of achieving resilience does not scale to a more complex (and capable) multi-edge world. Instead, we would enable developers to specify unique failover paths depending on the use case and on which layer the code is running. The three primary building blocks of these paths are:
    • a. “fail in” – retry at an inner layer of the concentric circles, where more compute resources are available.
    • b. “fail out” – retry at an outer layer of the system where latency is low, so it can make up for lost time.
    • c. “fail around” – retry at a neighboring node at the same layer in the system.
    We would also let developers tune these failover path lengths and enable paths selectively for different regions, times of day, and other factors. After specifying failover as part of the EC3 specification, detecting failures, following these paths, and reporting on them should all be automated. For example, if a failure is triggered by excessive load or cost quotas at the MEC layer, we might want it to “fail in” to somewhere with more resources, or that is cheaper, to ensure a function eventually succeeds. Alternatively, in a scenario where latency consistency is paramount (such as an automated control system that is aggregating and analyzing multiple data feeds), and a failure of a function at an inner layer has consumed some of the available time, we might want it to “fail out” to complete the function at a location closer to the customer. The EC3 specification should also allow developers to define multiple paths for a function as it runs at each layer, enabling more complex directed-graph structures. Loop detection and automated conflict resolution would be required to make this work.
  4. Canary new code to production carefully. Our solution should enable flexible but controlled upgrades to developers’ applications. Ideally, this would build on technology discussed at Velocity a few years ago, and add alerting, auto-advance, and rollback capabilities. These concepts get much trickier in a multi-edge world. For example, canary deployments must be influenced by all currently defined failover paths. If a given piece of code has a simple “fail in” path, new features should be deployed from the interior layers outward, while deprecations should be rolled from the outside in, to ensure that we don’t fail over to an older version of an application that is missing functionality.
  5. Consume analytics that provide visibility into production behavior. This would build on what EdgeControl can do at the SuperPoP layer to account for complexities such as the same function running at different layers, each with its own performance and resource consumption profile, which needs to be visible independently and in aggregate. In real time, our system would answer critical questions such as: What code is running where? Which failovers are happening where? What is user experience and latency like at different points in the network, or for different canary deployments?
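
The three failover building blocks in item 3 can be sketched as a small retry loop. This is purely illustrative: the topology, the `task` callable, and the function names are invented for the sketch, not part of the EC3 specification:

```python
# Illustrative failover over concentric layers: index 0 is the
# innermost layer (public cloud), 2 the outermost (5G/MEC).
# "fail in" retries one layer inward, "fail out" one layer outward,
# "fail around" a neighboring node at the same layer.
def next_node(layer: int, node: int, policy: str) -> tuple[int, int]:
    if policy == "fail_in":
        return layer - 1, node      # inner layers have more compute
    if policy == "fail_out":
        return layer + 1, node      # outer layers have lower latency
    if policy == "fail_around":
        return layer, node + 1      # neighboring node, same layer
    raise ValueError(f"unknown policy: {policy}")

def run_with_failover(task, start, path, topology):
    """Try the task along a declared failover path; topology[l] = node count."""
    layer, node = start
    for policy in [None] + list(path):      # first attempt, then each fallback
        if policy is not None:
            layer, node = next_node(layer, node, policy)
        if 0 <= layer < len(topology) and 0 <= node < topology[layer]:
            ok, result = task(layer, node)
            if ok:
                return result
    raise RuntimeError("all failover attempts exhausted")

# A task that fails at the overloaded MEC layer (2) but succeeds further in:
task = lambda layer, node: (layer < 2, f"done at layer {layer}, node {node}")
print(run_with_failover(task, start=(2, 0), path=["fail_in"], topology=[1, 4, 8]))
# done at layer 1, node 0
```

In a real system the path would be declarative (part of the EC3 specification), and detection, retries, and reporting would be automated rather than driven by an in-process loop, but the graph traversal is the same idea.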

This kind of solution isn’t just an option; we can expect developers to demand it. The various edges, with their various capabilities, will eventually come together to create something much greater than the sum of its parts. The sooner we can develop a cohesive system with an intuitive interface that lets developers manage the additional complexity of computing across multiple edges, the faster we’ll see the emergence of a whole new generation of world-altering applications.

Improvements welcome