How to Fight the Coming Latency Wars

Article by Jason Bloomberg

Increasing demand for real-time computing, combined with the power of AI, is driving the buildout of the edge. The winners of this transformation will be the companies that can minimize latency.

We certainly live in an age of wonders. We have supercomputers in our pockets, a global Internet, and applications in the cloud. In less than a lifetime, our world of four-channel televisions and rotary-dial telephones has transformed, turning futuristic science fiction into everyday technological reality.

AI continues to penetrate ever deeper into our lives on its way to ubiquity. The 5G rollout is well underway as consumers snap up the latest generation of 5G devices. Software infrastructure and applications are keeping pace with the rapid maturation of cloud-native computing.

Human nature being what it is, we now take the current technological state of affairs for granted, and we want more. Much more. Faster, better, and cheaper – especially faster.

The battleground of the near future, however, is not on our smartphones. It’s not even in the cloud. All these technology trends point to one nexus of exploding innovation and competition: the edge.

And on the edge, we will fight one battle in particular: the war over latency.

Understanding Latency

Latency is essentially the amount of time it takes for a request to go from its source (say, when you click a button in an app or on a website) to its destination and for the response to find its way back.

The lower the latency, the better. We’d all love to have immediate responses, but unfortunately, zero latency is an impossibility. There’s always something slowing us down.
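
One way to get a feel for latency is simply to time a request and its response. The sketch below (Python, with an illustrative URL – any endpoint would do) records the elapsed time from sending an HTTP request until the full response comes back.

```python
import time
import urllib.request

def measure_round_trip(url: str) -> float:
    """Return the round-trip time, in milliseconds, for one HTTP request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # wait until the entire response has arrived
    return (time.perf_counter() - start) * 1000

# Example: time a request to an arbitrary endpoint (the URL is illustrative).
print(f"Round trip: {measure_round_trip('https://example.com'):.1f} ms")
```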

In fact, slowdowns come in three basic categories:

Transmission speed. Just how fast can the bits move down the wire (or fiber optic cable, as the case may be)?

For any message, there’s always latency due to transmission speed, for one simple reason: the speed of light. No matter what you do, nothing can travel faster.

Light, of course, is quite fast enough for most situations (on Earth, in any case) – but the physical limit it places on minimum latency can be a factor.

In one millisecond, for example, light travels 186 miles – the distance, say, from New York City to Baltimore. Indeed, a message from one of these cities to the other might take longer than a millisecond – but it will never arrive any faster.
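
To make that physical floor concrete, here is a back-of-the-envelope sketch of the calculation (the speed-of-light figure is for a vacuum; signals in fiber travel roughly a third slower, so real numbers are worse):

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_282  # in a vacuum; light in fiber is ~30% slower

def minimum_round_trip_ms(distance_miles: float) -> float:
    """Lower bound on round-trip latency imposed by the speed of light."""
    one_way_seconds = distance_miles / SPEED_OF_LIGHT_MILES_PER_SEC
    return 2 * one_way_seconds * 1000  # there and back, in milliseconds

# New York City to Baltimore, roughly 186 miles.
print(f"Floor: {minimum_round_trip_ms(186):.1f} ms")  # about 2.0 ms round trip
```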

Network equipment. The original survive-a-nuclear-war design of the Internet’s precursor, ARPANET, required it to establish multiple paths across a number of routers and other pieces of network equipment.

To this day, any Internet request is likely to traverse multiple pieces of network gear, each adding a modicum of latency to the interaction. The more hops, the more latency.
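
The traceroute command will show the actual hops and their delays on a real network. As a purely illustrative model (the per-hop figures below are made up, not measurements), adding up even small per-device delays shows why a path with many hops is slower than a short one:

```python
def path_latency_ms(per_hop_delays_ms: list[float]) -> float:
    """Total latency contributed by the network gear along a path."""
    return sum(per_hop_delays_ms)

# Hypothetical delays (in ms) added by each router or switch along two paths.
short_path = [0.2, 0.3, 0.5]                      # 3 hops
long_path = [0.2, 0.3, 0.5, 0.4, 0.6, 0.3, 0.5]   # 7 hops

print(f"Short path adds {path_latency_ms(short_path):.1f} ms")  # 1.0 ms
print(f"Long path adds {path_latency_ms(long_path):.1f} ms")    # 2.8 ms
```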

Processing time. Once your request reaches its destination, you expect the application there to do something for you – and that something always takes a bit of time.

Some of this time has to do with the actual bit crunching on the CPU itself – but most of it generally involves interactions with the application’s underlying database.

If your database is in the cloud, then it is likely made up of multiple pieces that must communicate with each other in order to give you the right answer – communication that adds even more to the latency.
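
To see why the database side tends to dominate, consider a rough model (all of the timings here are assumed, hypothetical values): if serving one request requires several sequential round trips to a distributed database, those internal calls stack on top of the CPU work.

```python
def request_processing_ms(cpu_ms: float, db_calls: int, db_round_trip_ms: float) -> float:
    """Rough model: processing time = CPU work plus sequential database calls."""
    return cpu_ms + db_calls * db_round_trip_ms

# Hypothetical numbers: 2 ms of computation plus 4 sequential queries at 5 ms each.
print(f"Total: {request_processing_ms(2, 4, 5):.0f} ms")  # 22 ms, mostly database time
```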

