Two key industries face massive disruption in the coming years as IT becomes ever more strategic to enterprise operating advantage and IT infrastructure scales to new levels of complexity and dynamism. The future of IT will likely be created by those who accelerate the pace of automation and those who build more energy-efficient, scalable data centers.
Setting the Stage: Clouds and Networks
A few years ago ARCHIMEDIUS covered a cloud computing war shaping up between emerging private cloud players like VMware and Cisco and the public cloud players (Amazon, Google, etc.) who at the time owned most of the cloud mindshare. This high-level mindshare battle distracted many from critical issues, not unlike the way the dotcom milieu once masked underlying business issues.
There were plenty of discussions in 2008 about the networking issues that are today manifesting themselves in the cloud in the form of service outages, but those discussions took a back seat to the billowing, amorphous and yet captivating world of cloud marketing excess.
None of the public cloud leaders talked about the network, or about the collision of expectations that creates the preconditions for a new phenomenon: cascading IT failures triggered by pesky nits like an early morning configuration error. The answer I heard over and over again on various panels went something like this [I paraphrase]: “We have it all handled behind the curtain.” It wasn’t.
Background Resources on Cloud Outages
Some of the most notable outages were caused by simple yet destructive manual network configuration errors. That makes the recent Open Networking Summit a development worth watching: the approach it showcases introduces a new opportunity to automate the tedious, risky networking tasks that can slow the progress of virtualization in production environments, and it could become a private cloud powerhouse in its own right.
Network-related Storms in the Clouds
In addition to high profile Amazon and Microsoft outages, RIM recently experienced a massive network outage. See an interesting take from Infoblox’s Matt Gowarty at the Infrastructure 2.0 blog:
For the RIM outage, maybe a non-standard configuration or unplanned change caused the back-up failure. If RIM fixes today’s problem but doesn’t start proactively approaching and testing network configuration changes, it will risk another outage; the question is not if it will happen again, it’s when.
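Gowarty’s point about proactively testing configuration changes can be sketched as a simple pre-change validation gate. The sketch below is a minimal illustration with hypothetical rules, not any vendor’s actual tooling:

```python
# Minimal sketch of a pre-change configuration gate: every proposed
# change is validated against a set of rules before it can be applied.
# The rules and config shape here are hypothetical illustrations.

def no_duplicate_ip(config):
    """Reject configs that assign the same IP to two interfaces."""
    ips = [iface["ip"] for iface in config["interfaces"]]
    return len(ips) == len(set(ips))

def backup_path_defined(config):
    """Reject configs that drop the backup route entirely."""
    return any(r.get("role") == "backup" for r in config["routes"])

RULES = [no_duplicate_ip, backup_path_defined]

def validate_change(proposed_config):
    """Return the names of failed rules; an empty list means safe to apply."""
    return [rule.__name__ for rule in RULES if not rule(proposed_config)]

proposed = {
    "interfaces": [{"name": "eth0", "ip": "10.0.0.1"},
                   {"name": "eth1", "ip": "10.0.0.1"}],   # duplicate: a manual slip
    "routes": [{"dest": "0.0.0.0/0", "role": "primary"}], # backup route missing
}

failures = validate_change(proposed)
print(failures)  # both rules fail, so the change is blocked before rollout
```

The point is not the specific rules but the workflow: a change that fails any check never reaches production, turning “when it happens again” back into “if.”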
Similarly, as public clouds scale to new heights and become ever more dynamic, you can expect to see ever more outages. The industry is “architecting the plane in flight”: cloud expectations were set before IT was ready to support them. Today’s clouds are often enabled by new tools, developed internally or furnished by a growing list of cloud management and security startups, that are tested and deployed in ever faster cycles with merely “proportional” testing.
The Scalability Challenge
IT infrastructures are being built at scales against which few tools and few teams have been adequately tested. This creates both risk and opportunity for a growing list of companies playing in the private cloud space.
The importance of the network to the cloud is finally starting to get increased coverage, which wasn’t the case in 2008, when a few of us were stirring a pot that most considered less important than topics like the cloud definition debate. Trevor Pott, for example, recently raised the network issue in The Register (Cloudy challenge looms over networking):
With the exception of a few corner cases, processors are fast enough (and have enough cores) to suit everyone’s needs. RAM is dirt cheap, and for the second time in a row, Microsoft is delivering a Windows operating system that requires less RAM than the one before.
Storage costs have recently done that plummeting thing – and cheaper drives mean more of them RAIDed together provide even greater throughput.
What is left is networking. The cost of the servers to accomplish a given task has dropped so precipitously that most networks are at or near capacity, so this is where many refresh budgets will go.
Cisco’s James Urquhart raised the same specter in a Cisco blog back in 2008:
The next frontier to get explored in depth in the cloud world will be the network, and what the network can do to make cloud computing and virtualization easier for you and your organization. I look forward to sharing the excitement with you as the story unfolds.
The bottom line questions for tech investors: 1) which public and private players are the best positioned to tackle these issues and grow revenue; and 2) which are at the greatest risk of being devalued by innovation and network automation?
Wait… there’s more.
The Data Center Power and Cooling Squeeze
When it comes to the necessary evolution of cloud computing, there is also a data center issue emerging, driven by increasing power and cooling demands within the confines of out-of-date facilities, and by rebuild cycles hampered by a shaky capital market. While there are millions of square feet of data center space today, most of it was built when far fewer personal computers were simply exchanging email across their networks, before the rise of the enterprise web.1
No one seems to have a clear picture of how much of the existing enterprise data center space (millions of square feet) has outlived its economic usefulness. There is evidence of a shift to larger and more energy efficient data centers thanks to the increasing importance of the server to the enterprise and the deployment of more powerful servers; but no one seems to have estimated with any precision just how many outdated data centers there are, especially on a regional basis.
On a recent analyst tour, tech analysts were convinced that data centers built more than seven years ago are likely to be obsolete by today’s energy efficiency standards, while opinions are mixed among data center REIT analysts with broader real estate coverage responsibilities. For a single enterprise-class data center (say 15k+ square feet and drawing about 3 megawatts of power) obsolescence can add more than $1 million per year in power and cooling expense.2
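The seven-figure penalty can be sanity-checked with rough arithmetic. The PUE values and power price below are illustrative assumptions, not figures from the analysts:

```python
# Rough sanity check of the obsolescence penalty for a ~3 MW facility.
# Assumptions (illustrative): the legacy facility draws 3 MW total at a
# PUE of 2.0 (so 1.5 MW reaches the IT gear), a modern facility delivers
# the same 1.5 MW IT load at PUE 1.3, and power costs $0.11/kWh.

HOURS_PER_YEAR = 8760

it_load_kw = 1500                      # useful IT load in kilowatts
legacy_pue, modern_pue = 2.0, 1.3      # assumed efficiency of old vs. new builds
price_per_kwh = 0.11                   # assumed Santa Clara-area rate, USD

legacy_kw = it_load_kw * legacy_pue    # 3000 kW total draw
modern_kw = it_load_kw * modern_pue    # 1950 kW total draw

annual_penalty = (legacy_kw - modern_kw) * HOURS_PER_YEAR * price_per_kwh
print(round(annual_penalty))  # avoidable annual spend, roughly $1M
```

Under these assumptions the avoidable spend lands just above $1 million per year, consistent with the estimate above; at higher local power rates the penalty grows proportionally.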
Since 2004 the sheer number of devices connected to the internet has mushroomed by more than 300%, according to the ISC Domain Survey from the Internet Systems Consortium. More importantly, more of these new devices are communicating with servers for common tasks, versus the hard drive-centric computers of the PC era.
The power and cooling demands on data centers increase dramatically as more devices access more software and services from servers. It is a compounding increase: more devices requesting more types of apps, content and services from a shrinking population of more powerful and efficient physical servers (and a growing number of virtual server instances). A data center built more than seven years ago, before the shift to server-centric computing, is likely to be obsolete, eroding TCO through wasted energy.
Bottom line: the server and the data center have become a new critical threshold for availability and strategic operating advantage, just as routers, switches, firewalls and application front ends became strategic as computing entered new delivery eras.
The energy efficiency designed into newer and more powerful servers is not enough to reduce the overall need for power and cooling in the data center; it just reduces the growth that otherwise might occur with a mere refresh of comparable equipment.
See The Data Center is the Server… for a more detailed review of this discussion, including the shift to more server-centric enterprises and business models:
Most data centers in use today are not particularly innovative and, like filling stations with leaky hoses on their pumps, waste about half of the power and cooling intended for the IT infrastructure. Many are also running out of headroom for growth, are located in or maintain poor work environments and are losing their value as critical business assets, ironically at the time when IT applications and services are becoming increasingly critical to profits and valuations.
Factors Driving a New Wholesale Data Center Model
As IT infrastructures approach 1 MW of power consumption, the facility itself, and how it is architected from an electrical, mechanical and design standpoint, becomes a material financial interest to the enterprise. In many older facilities, more power is consumed merely cooling the data center than powering the IT equipment itself. As data centers grow, there is a business imperative to address waste and inefficiency and to customize the facility to unique application, infrastructure and service requirements.
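A facility that spends more power on cooling than on the IT gear itself corresponds, in the industry’s usual metric, to a power usage effectiveness (PUE) above 2.0. A minimal illustration, with made-up wattages:

```python
# PUE = total facility power / IT equipment power.  A facility whose
# cooling and power-distribution overhead exceeds the IT load itself
# necessarily has a PUE above 2.0.  The wattages below are made up.

def pue(it_kw, overhead_kw):
    """Power usage effectiveness: total draw divided by useful IT draw."""
    return (it_kw + overhead_kw) / it_kw

old_facility = pue(it_kw=1000, overhead_kw=1200)  # overhead exceeds IT load
new_facility = pue(it_kw=1000, overhead_kw=300)   # purpose-built efficiency

print(old_facility, new_facility)  # 2.2 1.3
```

Every tenth of a point of PUE shaved off flows straight to the power bill, which is why the metric dominates wholesale data center design conversations.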
Because companies using out-of-date data center space can be wasting millions of dollars per year on unnecessary energy costs, there is a natural tendency to consolidate racks and stacks out of closets and older buildings into larger and more efficient buildings specifically designed for IT demands and business objectives.
Because of the increasing importance of IT competitiveness and operating advantage, enterprises are exploring wholesale data center leases instead of outsourcing or leasing retail colocation space or even building their own advanced facility, which today is a riskier and costlier proposition. Wholesale space allows the enterprise to customize the data center to their unique IT environment demands while maintaining complete control over their critical IT assets and personnel. Enterprises get substantial savings in power costs, reduce their carbon footprint and leverage REIT money. For more information read Wholesale Data Centers in the News.
Larger, More Efficient Data Centers
A recent Gartner research report (“Table 1-1: Data Centers by Size and Region, 2010-2015”) signals this trend, with shrinkage in the “rack/computer room” and midsize data center segments and growth in the enterprise and large data center categories between 2010 and 2015.3
As private clouds evolve to offer similar economics and elasticity to public cloud alternatives and enterprises evolve to be more software and services-centric, the data center becomes even more strategic to the bottom line and competitive operating advantage.
Another challenge is scalability. Most facilities are provisioned for horizontal scalability, or the notion that IT growth (or more power consumption) requires more space or construction of another facility. More advanced wholesale facilities are pre-provisioning the data center for higher loads within the same space (called “vertical scalability”). This substantially extends the life of the data center.
Obsolete networks and management practices, along with obsolete data centers, may be two of the most significant impediments to the promise of the public and private cloud and to the spread of virtualization. They will also set the stage for a new generation of startups to disrupt once comfortable status quos. Those who tackle the network and the data center strategically stand to reap outsized benefits as IT takes on an even more important role in the enterprise.
1) The rise of the enterprise web (applications being delivered across WANs, etc.) drove massive increases in application delivery equipment shipments (FFIV, RVBD, CSCO), not to mention more than $1 billion in acquisitions (Redline, NetScaler, FineGround). More traffic went to servers in data centers, but nothing like the explosion in endpoints we’re seeing today.
2) Based on power pricing in Santa Clara, California. In other markets the impact may vary due to local power costs.
3) You can find it via Gartner’s portal: Forecast: Data Centers, Worldwide, 2010-2015, 7 Oct 2011, ID: G00221544, by Jonathon Hardcastle.