Fiber Mountain: Scaling Hyperscale

As information technology continues to evolve at an exponential pace, successive layers of virtualization have abstracted away the underlying technology, providing greater simplicity and flexibility to users.

From personalized user experiences to containers and hypervisors all the way down to software-defined networks (SDNs), virtualization has afforded us enormous benefits. And yet, at the core of all these abstractions are physical systems – the computers and networks that constitute the bare metal and cabling that the virtualization house of cards depends upon.

These virtual and physical worlds meet most often at data centers, those surreal landscapes crammed with racks of server technology, packed as tightly as the power and cooling systems that support them can handle.

And whether you’re talking about traditional enterprise data centers, the facilities that run web-scale companies like Facebook or Twitter, or the truly massive installations that form the physical aspect of public clouds, today’s data centers must be larger than ever, and they must be built to grow even more gigantic – what we now call hyperscale.

Defining Hyperscale

Hyperscale means more than simply very large. A data center is hyperscale if it is also architected to grow cost-effectively. In other words, a hyperscale data center is designed to scale.

Today’s cloud-savvy IT professional is used to discussions of horizontal scalability, so it’s worth clarifying the important difference between the seemingly unlimited scalability of cloud environments and the hyperscale of data centers.

Cloud scalability depends upon virtualization, and in many cases, such virtualization abstracts data centers into regions or even into hybrid clouds. Hyperscale data centers, on the other hand, are physical installations. Scaling data centers means more real estate, more or bigger buildings, more power, cooling, racks, and servers – and a more extensive physical network connecting everything together.

To date, the primary metric driving the design of hyperscale data centers has been cost. Data center designers must leverage massive economies of scale to drive costs out of the construction and operation of such facilities. As a result, even tiny per-unit savings can yield substantial benefits at hyperscale.

Hence the move from servers with cases to blades to multiple-server boards. Improvements in power and cooling. Shifting from expensive storage-area networks to cheap direct-attached storage. And forgoing expensive brand-name networking gear for “white-label” equipment, built by no-name suppliers in the Far East focused on minimizing cost at scale.

The Network Architecture Bottleneck

As hyperscale data centers grow in size, however, they eventually reach the limits of cost optimization driven by economies of scale because of the limitations of the physical network architecture. SDNs, of course, virtualize individual networks and the routing of traffic on those networks, but SDNs must run on physical networks, which in turn depend upon physical connections between servers and the various types of network equipment supporting the control plane and data plane abstractions vital to SDN and cloud-based networking in general.

Because SDNs must be able to establish connections among virtual machines scattered arbitrarily across physical servers, both within each data center and potentially across multiple data centers, the underlying physical networks must rely upon some kind of “any-to-any” connectivity that allows for such arbitrary software-driven control.

Clearly, however, networking equipment that allows for such any-to-any connectivity will have limits on its scalability, especially as hyperscale data center designers move to increasingly inexpensive white-label equipment. After all, there are only so many ways to physically connect everything to everything else.
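To get a feel for the problem, consider the most naive form of any-to-any connectivity: a full physical mesh in which every device has a direct link to every other. The number of links grows quadratically with the number of devices, as this back-of-the-envelope sketch (purely illustrative; the device counts are hypothetical and describe no particular product or topology) shows:

    # Point-to-point links required for a full mesh of n devices: n * (n - 1) / 2
    def full_mesh_links(n: int) -> int:
        return n * (n - 1) // 2

    # Hypothetical device counts: roughly a rack, a row, and a hall
    for n in (48, 480, 4800):
        print(f"{n:>5} devices -> {full_mesh_links(n):>12,} links")

    # Prints:
    #    48 devices ->        1,128 links
    #   480 devices ->      114,960 links
    #  4800 devices ->   11,517,600 links

Real data centers use multi-tier switch fabrics rather than literal full meshes, of course, but the same quadratic pressure is why the physical network – the switches, cabling, and cross-connects – eventually becomes the constraint that economies of scale alone cannot fix.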

As a result, hyperscale data centers are struggling with a bottleneck at the physical network level, a bottleneck that economies of scale cannot address. Instead, hyperscale data centers require a new network architecture that removes the any-to-any bottleneck.

Fiber Mountain: Rearchitecting the Hyperscale Network

Rethinking the network architecture for hyperscale data centers is what Fiber Mountain’s Glass Core® technology brings to the market. Not only does this technology remove the traditional bottlenecks inherent in today’s data center networking technology, it also abstracts the physical connections to each server.

In other words, Fiber Mountain allows for software control of the connections between each piece of equipment in the data center and the network itself, removing the any-to-any bottleneck inherent in today’s hyperscale data center design, thus allowing economies of scale to drive costs out of such facilities unabated.
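As a rough mental model, and strictly as an illustrative sketch rather than Fiber Mountain’s actual API, software control of physical-layer connectivity amounts to a controller that programs cross-connects between ports on demand, instead of requiring every possible pair of endpoints to be pre-cabled:

    # Hypothetical sketch of a software-controlled physical cross-connect.
    # Class, method, and port names are illustrative, not Fiber Mountain's API.
    class CrossConnectController:
        def __init__(self) -> None:
            # Maps (source port, destination port) to a programmed path id
            self.paths: dict[tuple[str, str], str] = {}

        def connect(self, src_port: str, dst_port: str) -> str:
            """Program a direct physical-layer path between two ports."""
            path_id = f"path-{len(self.paths) + 1}"
            self.paths[(src_port, dst_port)] = path_id
            return path_id

        def disconnect(self, src_port: str, dst_port: str) -> None:
            """Tear down the path so both ports can be reused elsewhere."""
            self.paths.pop((src_port, dst_port), None)

    # An SDN control plane could request connectivity only when two endpoints
    # actually need it, rather than pre-provisioning an any-to-any fabric.
    controller = CrossConnectController()
    controller.connect("rack12/server03/nic0", "rack40/server07/nic1")

The point of the sketch is where the any-to-any promise lives: in software that can reconfigure the physical layer on demand, rather than in hardware that must be wired up front for every eventuality.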

Furthermore, Fiber Mountain’s equipment costs substantially less than even the white-label equipment it replaces – a boon not only to the largest of large hyperscale data center providers, but to any modern data center that wants to take advantage of the best network architecture available.

There is now no reason to purchase traditional networking gear that introduces future bottlenecks to the scalability of your data center design. Remember, hyperscale doesn’t just mean really big. Hyperscale means the ability to scale the data center as needed. Now that Fiber Mountain has changed the economics of hyperscale design, there’s no reason for any data center designer to take a different approach.

Fiber Mountain is an Intellyx client. At the time of writing, no other organizations mentioned in this article are Intellyx clients. Intellyx retains full editorial control over the content of this article.
