Cloud computing has become a reality, yet the hype surrounding the cloud has started to outrun the laws of physics and economics. The robust cloud (the vision of all software on demand replacing the enterprise data center) will crash into some of the same barriers and diseconomies facing enterprise IT today.
Certainly there will always be a business case for elements of the cloud, from Google’s pre-enterprise applications to Amazon’s web services and the powerhouse CRM, HR and other established cloud offerings. Yet there are substantial economic barriers to entry rooted in the nature of today’s static infrastructure.
We’ve seen this collision between new software demands and network infrastructure many times before, as it has powered generations of innovation around TCP/IP, network security and traffic management and optimization.
It has produced a lineup of successful public companies well positioned to lead the next tech boom, which may even be recession-proof. Cisco, F5 Networks, Riverbed and even VMware stand to benefit from this new infrastructure and the connectivity intelligence it promises. (More about these companies and others later in this article.)
Static Infrastructure Meets Dynamic Systems and Endpoints
I recently wrote about clouds, networks and recessions, taking a macro perspective on the evolution of the network and a likely coming recession. I also cited virtualization security as an example of yet another collision between more robust systems and static infrastructure, one that has slowed technology adoption and created demand for newer and more sophisticated solutions.
I posited that VMware was a victim of expectations raised by the promise of the virtualized data center and muted by technological limitations its technology partners could not address quickly enough. Clearly the network infrastructure has to evolve to the next level and enable new economies of scale. And I think it will.
Until the current network evolves into a more dynamic infrastructure, all bets are off on the payoffs of pretty much every major IT initiative on the horizon today, including cost-cutting measures meant to shrink operating costs without shrinking the network.
Automation and control have been both a key driver of and a barrier to the adoption of new technology, as well as to an enterprise’s ability to monetize past investments. Increasingly complex networks require escalating rates of manual intervention. This dynamic will have more impact on IT spending over the next five years than the global recession, because automation is often the best answer to the productivity and expense challenge.
Networks Frequently Start with Reliance on Manual Labor
TCP/IP Déjà vu
A very similar scenario is playing out in the TCP/IP network as enterprise networks grow in size and complexity and begin handling traffic between more dynamic systems and endpoints. A recent Computerworld survey (sponsored by Infoblox) shows larger networks paying a higher IPAM price per IP address than smaller networks. As I mentioned earlier at Archimedius, this is clear evidence of networks growing into diseconomies of scale.
Acting on a hunch, I asked Computerworld to pull more data based on network size, and they were able to break their findings down into three network size categories: 1) under 1,000 IP addresses; 2) 1,000-10,000 IP addresses; and 3) more than 10,000 IP addresses. Because the survey was based on only about 200 interviews, I couldn’t break the trends down any further without taking statistical leaps with small samples.
Perhaps it takes 30 minutes on average to find an address, allocate it, get a device configured, update the spreadsheet and update DNS. That was manageable in a static world, though the rising cost per IP to perform these tasks in larger networks is a direct consequence of manual systems breaking down in the face of scale. Now consider a 30-minute process for a device – or a virtual application instance – that changes IPs every few hours, or faster. When a 1.0 infrastructure meets 2.0 requirements, things start to break pretty quickly.
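To make the contrast concrete, here is a minimal sketch of that same allocate-and-register workflow once it is automated. The IPAMClient class and its methods are hypothetical stand-ins for illustration, not any vendor’s actual API:

```python
# Hypothetical sketch: the manual 30-minute workflow collapsed into one
# automated, auditable call path. The IPAMClient API is invented for this
# example; real IPAM products expose their own equivalents.

class IPAMClient:
    """Toy in-memory IPAM: tracks free addresses and name records."""

    def __init__(self, subnet_pool):
        self.free = list(subnet_pool)   # addresses available for allocation
        self.records = {}               # hostname -> address (the spreadsheet + DNS, unified)

    def allocate(self, hostname):
        """Find a free address, record it, and register the name in one step."""
        if not self.free:
            raise RuntimeError("subnet exhausted")
        address = self.free.pop(0)
        self.records[hostname] = address  # replaces the manual spreadsheet/DNS edits
        return address

    def release(self, hostname):
        """Return the address to the pool when the (virtual) host goes away."""
        self.free.append(self.records.pop(hostname))


if __name__ == "__main__":
    ipam = IPAMClient([f"10.0.0.{i}" for i in range(10, 20)])
    addr = ipam.allocate("vm-web-01")   # seconds, not 30 minutes
    print("vm-web-01 ->", addr)
    ipam.release("vm-web-01")           # safe to reuse for the next instance
```

The point is not the toy data structure but the shape of the interaction: allocation, recording and name registration become one transaction instead of three manual steps spread across tools.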
That is why, even with the simple act of managing an enterprise network’s IP addresses, which is critical to the availability and proper functioning of the network, the cost per address actually goes up as addresses are added. As TCP/IP continues to spread and take productivity to new heights, management costs are already escalating.
This is a fundamental observation based on one of the most common network management tasks. You can assume there are other cost curves even steeper, driven by complexity and reliance upon manual labor.
Some enterprises are already paying even higher expenses per IP address, and chances are they don’t even know it, because these expenses are hidden within network operations. Reducing headcount risks increasing these costs further or forcing substantial sacrifices in network availability and flexibility.
IPAM as the Switchboard Metaphor
If something as simple and straightforward as IP address management doesn’t scale, imagine the impact of more complex network management tasks, like those involved in consolidation, compliance, security and virtualization. There are probably many other opportunities for automation tucked away within IT departments, in the mesh between static infrastructure and moving, dynamic systems and endpoints.
This will force enterprise IT departments into discussions much like those that likely took place decades ago within the Bell System, when telecom executives looked at the dramatic increase in the use and distribution of telephones and the mushrooming requirements for operators and switchboards and offices and salaries and benefits. One can only imagine the costs and challenges we would face today if basic connection decisions were still made by a human operator.
The counterpart to the switchboard of yesteryear for IPAM is the spreadsheet of today. Networking pros in most enterprises manage IP addresses using “freeware” that has an ugly underside: it produces escalating hidden expenses that are only now being recognized, mostly by large enterprises. Mix network growth with new dynamic applications, increasing mobility and a little human error, and you have a recipe for availability, security and TCO problems.
Many of those switchboards could probably be bought or manufactured today for a song, yet it is the other costs (TCO, availability and flexibility) that make them cost-prohibitive.
Server Déjà vu
Another TCO fable similarly bound to take the steam out of cloud fantasies has to do with hardware expenses. The cloudplex will utilize racks of commodity servers populated with VMs that can scale up as needed, saving electricity and making IT more flexible. That makes incredibly good sense, but are we really there yet? No.
Servers have a very large manual labor component, according to an IDC report hosted at Microsoft.com. The drumbeat of real estate and electricity savings may play well to the big-picture buyer; yet perhaps the real payoff of virtualization is its potential to automate manual tasks, like creating and moving a server on demand.
Virtualization security now risks becoming a metaphor for other technology-related issues that could slow down the adoption of virtualization in the lucrative production data center market.
Netsec Wasn’t Ready for Virtsec
The lack of connectivity intelligence in network security meant that security policy, for example, would confine VMotion to within hardware-centric hypervisor VLANs. Network security infrastructure wasn’t prepared for the challenge of protecting moving, state-changing servers, despite the promise of a stellar lineup of VMsafe partners.
The promise of virtualization that drove VMware’s stock price into the clouds eventually met lowered growth expectations, as deployments were slowed by a lack of connectivity intelligence that no doubt also undermined other potential business cases for virtualization’s unquestionable power to someday unleash new economies of scale and computing power. These issues will hit the cloud dream too, as they have already impacted other initiatives, albeit on a smaller and less visible scale.
Today there are plenty of new initiatives facing mounting pressure for connectivity intelligence and automation, pressure that has already left enterprise CIOs holding the bag amid ecosystem finger-pointing. Whether or not we enter a global recession, these pressures will continue and likely worsen. They are artifacts of years of application, network and endpoint intelligence promises colliding with static TCP/IP infrastructure.
Saving money by cutting network operations or capital budgets is the equivalent of Ma Bell laying off operators or closing switchboards in the midst of unstoppable growth. Automation is the only way out, as Cisco’s Chambers hinted recently.
As much as cloud computing has rallied behind the prospect of electricity and real estate savings, the business case still smacks of a dotcom hangover in places. Virtualization is still hamstrung in the enterprise by the disconnect between static infrastructure and moving, state-changing VMs; and labor is the largest cost component of server TCO (per the IDC findings) and a significant component of network TCO (as the Computerworld findings suggest). So just how much will real estate and electricity savings offset the other diseconomies and barriers in the cloud game? I think cloud computing will also have to innovate in areas like automation and connectivity intelligence.
For the network to be dynamic, for example, it needs continuous, dynamic connectivity at the core network services level. Network, endpoint and application intelligence will all depend upon connectivity intelligence in order to evolve into dynamic, automated systems that don’t require escalating manual intervention in the face of network expansion and rising system and endpoint demands.
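As one small illustration of what dynamic connectivity at the core network services level implies: when a system moves or changes address, its DNS record should follow automatically rather than wait on a ticket. A minimal sketch using the open-source dnspython library, assuming a DNS server that accepts RFC 2136 dynamic updates (the zone, hostname and server address below are placeholders):

```python
# Sketch: pushing an RFC 2136 dynamic DNS update when a host moves.
# Requires dnspython (pip install dnspython); zone, hostname and server
# are illustrative placeholders, not real infrastructure.
import dns.query
import dns.update

def follow_the_host(zone, hostname, new_address, dns_server):
    """Replace the host's A record so DNS tracks the move automatically."""
    update = dns.update.Update(zone)
    update.replace(hostname, 300, "A", new_address)  # short TTL for fast churn
    return dns.query.tcp(update, dns_server, timeout=10)

if __name__ == "__main__":
    # Example: vm-web-01 just landed on a new subnet after a live migration.
    # A production setup would authenticate this update with TSIG keys.
    follow_the_host("example.com", "vm-web-01", "10.0.1.12", "192.0.2.53")
```

That single automated step is the difference between infrastructure that tolerates movement and infrastructure that requires an operator every time something moves.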
Getting beyond Infrastructure1.0’s Zero-Sum Game
Whether you “cloudsource” or upsize your network to address any number of high-level business initiatives, the requirements for Infrastructure2.0 will be the same. You can certainly get to virtualization and cloud (or consolidation or VoIP, etc.) with a static infrastructure; you’ll just need more “operators”, more spreadsheets and other forms of manual labor. That means less flexibility, more downtime and higher TCO; and you’ll be going against the collective wisdom of decades of technologists and innovations.
This recession-proof dynamic gives the leaders in TCP/IP, netsec and traffic optimization an inherent advantage, if they can acquire the connectivity intelligence necessary to deliver dynamic services. They have demonstrated the expertise to build intelligence into their gear; they simply haven’t had the connectivity intelligence to deliver the dynamic infrastructure. Yet that is inevitable.
The Potential Leaders in Infrastructure2.0
Cisco is the leader in TCP/IP and has the most successful track record when it comes to executing in the enterprise IT market. Cisco has kept pace with major innovations in security and traffic management as well, and it is likely to become a leader in Infrastructure2.0 as enterprises seek to boost productivity while their networks continue to become strategic to business advantage in an uncertain world economy.
F5 Networks has become the leader in application-layer traffic management and optimization, thanks to its uncanny ability to monetize the enterprise web, or the enterprise initiative to deliver core applications over the WAN and Internet. Its ability to merge load balancing with sophisticated application intelligence positions it to play an important role in the development of dynamic infrastructure.
Riverbed has come on the scene thanks to its ability to optimize a vast array of network protocols, letting its customers empower their branch offices like never before. While many tech leaders focused on the new data center, Riverbed achieved stellar growth by focusing on the branch office boom enabled by breakthroughs in traffic management and optimization. It was a smart call that has positioned Riverbed to be a leader in the emerging dynamic network.
Infoblox is the least known of the potential I2.0 leaders. It is a private company that already counts more than 20% of the Fortune 500 as customers. Its solutions automate core network services (including IPAM), enabling dynamic connectivity intelligence for TCP/IP networks. (Disclosure: I left virtualization security leader Blue Lane Technologies in July to join Infoblox, largely because of its track record of revenue growth, sizable customer base and the promise of core network service automation.) Infoblox’s founder and CTO is also behind the IF-MAP standard, a new I2.0 protocol that holds promise as a key element for enabling the dynamic exchange of intelligence among infrastructure, applications and endpoints (think MySpace for your infrastructure).
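IF-MAP itself is a SOAP-based publish/search/subscribe protocol; the sketch below is not the IF-MAP wire format but a conceptual illustration, in Python, of the kind of metadata exchange it enables. Every class and identifier here is invented for the example:

```python
# Conceptual sketch of the publish/subscribe metadata exchange that
# IF-MAP enables. This is NOT the IF-MAP wire protocol (which is
# SOAP/XML); all names here are invented for illustration.
from collections import defaultdict

class MetadataBroker:
    """Toy MAP-style server: clients publish metadata about identifiers
    (devices, addresses, users) and subscribe to changes on them."""

    def __init__(self):
        self.metadata = defaultdict(dict)      # identifier -> {key: value}
        self.subscribers = defaultdict(list)   # identifier -> callbacks

    def publish(self, identifier, key, value):
        self.metadata[identifier][key] = value
        for callback in self.subscribers[identifier]:
            callback(identifier, key, value)   # push the update immediately

    def subscribe(self, identifier, callback):
        self.subscribers[identifier].append(callback)


broker = MetadataBroker()

# A firewall subscribes to changes for a VM it protects...
broker.subscribe("vm-web-01",
                 lambda ident, k, v: print(f"firewall: {ident} {k}={v}"))

# ...and the virtualization layer publishes a move; policy follows the VM.
broker.publish("vm-web-01", "ip-address", "10.0.1.12")
```

The social-network analogy holds: instead of each device polling every other device, everyone publishes to and subscribes through a shared hub, so infrastructure, applications and endpoints learn about changes as they happen.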
VMware is executing on the promise of production virtualization and clearly now has the most experience in addressing the challenges of integrating dynamic processing power with static infrastructure. I think the biggest question for VMware is how much it will have to build or acquire in order to address those challenges, because not all of its technology partners are adequately prepared for the network demands of dynamic systems and endpoints. VMsafe was a big step forward on the marketing front, but partners have been slow to deliver virtsec-ready products.
Google has no doubt benefited from the hype surrounding cloud computing, investing in cloudplexes and new pre-enterprise cloud applications. While I have reservations about its depth of infrastructure experience (versus the Nicholas Carr prediction of the eventual decline of enterprise IT), one would be hard-pressed not to include Google as a player driving requirements for a more dynamic infrastructure.
Microsoft has recently become more vocal on both the virtualization and cloud fronts, and it has tremendous assets with which to force innovation in infrastructure, much as its ever more powerful applications have influenced endpoint and server processing requirements. It is likely to play a similar role as the network becomes more strategic to the cloud.
There are no doubt other players (both public and private) that promise to play a strategic role in this next technology revolution, including those delivering more power, automation and specialization around network, endpoint and application intelligence as well as enabling more movement and control in virtual and cloud environments. All are welcome to join the conversation.
These leaders are well positioned to play a substantial part in the race to deliver Infrastructure2.0, and strategic enterprise networks promise to be big winners. The dynamic infrastructure will change the economics of the network by automating previously manual tasks, and it will unleash new potential for application, endpoint and network intelligence. It will also play a major part in the success or failure of many leading networking and virtualization players, as well as enterprise IT initiatives, during periods of economic weakness and beyond. Infrastructure2.0 is the next technology boom. It is already underway.
Greg Ness is a Senior Director at Infoblox. He was previously at Blue Lane Technologies, Juniper Networks, Red Line Networks, IntruVert/McAfee and ShoreTel. He has been a blogger at Always On since spring 2004. This is the third article in a series on Infrastructure2.0. For more information and a disclaimer go to: Archimedius. This does not constitute investment advice. The author can be reached at gnessatinfoblox.com.