

Hyperscale Data Centers Architected to Handle Next-Generation Network Infrastructure


September 14, 2016

It often seems that IT infrastructure is expanding at a hyperscale rate, as networks, data centers and endpoints evolve to accommodate a content- and data-hungry world. Networks are being re-architected to operate more efficiently and carry massive amounts of data, and they are increasingly connecting a new breed of data center designed specifically to handle this data explosion.

The dawn of the hyperscale data center is upon us, as cloud computing giants, carriers, service providers and even large enterprises rethink and re-architect their data centers to keep pace with the era of mega data. According to Allied Market Research, the global hyperscale data center market will reach $71.2 billion by 2022, largely thanks to an explosion of cloud-based resources and services.

Hyperscale data centers are a product of an increasingly resource-hungry technology culture that spans just about every vertical market. As a result, the hyperscale data center is markedly different from traditional data center architecture. It reflects the trend toward off-the-shelf, bare-metal hardware built to specification, typically running open source software developed to maximize efficiency and cost savings.


This new breed of data center is usually “green” out of necessity, since power and energy consumption at such a massive scale can easily get out of hand. Companies like Google, Facebook and Amazon are building data centers in cold climates like Scandinavia and Russia, and some are using sea, lake and fjord water as part of the cooling process. Microsoft went so far as to submerge a data center in the Pacific Ocean last year as part of a test aimed at reducing energy costs and overall footprint.

Beyond the physical logistics, the new breed of hyperscale data center is tailored toward running cloud applications, which are easily portable and may be shifted from server to server as workloads scale and fluctuate. Traditional data center design and management simply cannot handle the data volumes and workflows hyperscale facilities support.

The hyperscale data center is an organic extension of an ever-evolving IT landscape. As network infrastructure transforms in the era of always-connected mega data, data centers are transforming at the same pace. It’s a new era of computing, and yesterday’s data centers simply can’t handle the next generation’s infrastructure.




Edited by Maurice Nagle
