You don’t have to be Facebook...
It sounds about as exciting a term as any in the data centre world - but ‘hyperscale’ is both very much here, and yet very far away for many organisations. As a way of running a facility, it’s unquestionably at the core of how major businesses will service their technology needs in future. To date though, the hyperscale market has been dominated by the big cloud providers, colos and an elite few enterprises - all fuelled by a combination of data-driven consumer services and enterprise ‘XaaS’ demand.
For most people, the ‘hyperscale hype’ has been tied to the volumes of data generated by certain ‘hero of the Valley’ organisations. The whole business model of the likes of Facebook and Google is premised entirely on data and what it affords the company in terms of ad-led revenue generation. But the balance is shifting. By its nature, hyperscale describes a massively scalable compute architecture, and that’s exactly what most enterprises today need.
Cue the statistic on global data creation...
By 2025, the world will have amassed an incomprehensible 163 zettabytes (ZB) - so we’re told by IDC - of which 60% will be created by enterprises. While we’re used to our Silicon Valley heroes leading the charge on highly affordable, highly scalable data management, we know that demand will spread to a far wider range of organisations. Indeed, the hyperscale bug has spread to banks, IT and telecom providers, pharmas, governmental organisations and numerous complex industrial use cases. This is driving demand across the US, Europe and Asia - such as the vast new facilities being created in India to meet the needs of banks and e-commerce giants.
Just look at predictions for smart cities and automated vehicles, and you have but a snapshot of the explosion in data. And yet the difficulty is that businesses have become almost too nonchalant about data creation. Data is the currency that comes at no cost (so it seems) - lines of business create it, knowing it goes somewhere and that someone else deals with it. Chances are, if you’re reading this blog, you’re that ‘someone else’ - either wondering where you’re going to put the data, or hoping to sell floorspace to answer the problem.
Scaling your wallet
So we’re probably agreed that hyperscale is on the roadmap and where future budget will be directed. I say future, because at present, the hyperscale vision is easier said than done; the cost of actually delivering on a hyperscale strategy is prohibitive for many organisations. For example, Microsoft is said to have invested the better part of $20bn to build its Azure cloud infrastructure. Certainly over time the cost of delivering a hyperscale model will come down - especially as the paradigm itself continues to evolve to embrace the edge (more on that in a future blog) - but for now it’s not for those who are lacking in the wallet stakes.
It’s in this context that Sami Badri, Credit Suisse’s oft-quoted data centre watcher, has said: “We’re in the middle of a major architectural shift...during this transition, there is a lot of opportunity for colo providers to take on a lot of workloads.” Good news for that top tier of organisations that have scaled out their footprint in readiness for the flood of demand.
Hyperscale as a commodity?
So what do we expect the colocation market to be focusing on through 2018 and beyond? Well for starters, the battle will be on for traditional enterprises to benefit from the kind of efficiency that the big cloud providers have enjoyed over the last decade or so. In other words, it will be the commercialisation (even commoditisation) of the hyperscale model. Optimisation will be a key part of any colocation provider making a success of the shift to hyperscale, and while I’m always cautious about using the word, this is likely to invoke the almighty power of ‘automation’.
The pace of change in the data centre risks running away with us, and automation will keep maturing apace as we seek to react. Layer in the constant need to address regulatory challenges (anyone mentioned the GDPR recently?), to deliver cost and emissions reductions, and to get ever-faster at handling workloads from the business. We’re going to need things to move terrifyingly quickly.
I’ll address this in more detail in an upcoming blog - but consider the machine learning use cases deployed by the likes of Google that will move into the mainstream, and investments from giants such as Intel and Nvidia. There’s little ambiguity about the direction towards automation.
And yet there’s also a seismic shift in direction to consider. Associated with this need to do more (and faster) in each facility will be the need to do more, in more locations. Edge computing is going to challenge the current hyperscale warehouse model to deliver efficiency and scalability in locations closer to the point of consumption. The same efficiencies that have been seen at server and rack level, paired with ever-improving automation, will be needed for the edge vision to come to fruition. From customers and investors alike, eyes are on the colos as to how they prepare for this multiplication of operational sites. Watch this space.
Owning the big and the small
As ever, the market is fragmented - between enterprises that need hyperscale capacity but can’t afford it, and those that can. Between colos and cloud providers who are masters of scale, yet are challenged by locality. It’s an exciting time. Hyperscale is having to evolve from megasites in key markets to more localised points of compute. I’d love to call it the HyperEdge, but I suspect I’m getting carried away with myself.
We’ll be investigating these themes in our next few blogs. In the meantime, if you want to discuss hyperscale, edge, or just swap notes on the market, we’d be happy to talk.
VERTIV COLOCATION eXCHANGE – YOUR RESOURCE FOCUSSED ON DATACENTRE DESIGNS AND INNOVATIONS TO HELP YOUR BUSINESS OPERATE AT PEAK PERFORMANCE