If you drew a triangle and placed Cisco, Microsoft and VMware at its corners, you would have a good idea of where the center of power lies in the IT industry and the emerging cloud computing market. And plenty of well-heeled tech companies would love to keep this status quo intact for as long as possible, simply because they’ve run out of steam and are counting on complexity and lock-in to postpone the revolution.
Outside each point, of course, you could list competitors who are potential partners or disrupters, including HP, F5 Networks, Citrix, Juniper Networks, Amazon, Google and IBM. They are missing a few critical components to reach into the enterprise and disrupt the triumvirate, but they have the incentives to insert themselves and their own powerful spheres of influence, from technology to business model.
Outside those spheres of influence you could list category players who have survived, or who threaten, because of their best-of-breed status. They are there because they can transform a particular category (security, management, Ethernet switching, WAN acceleration) but not the whole game itself. Think Silver Peak in WAN optimization, Palo Alto Networks in security, and Arista Networks in switching.
Yet if you were to draw a circle inside the triangle you would identify a “no vendor’s land”: a kind of meat space of manual processes, configurations, scripts, spreadsheets, committees and checklists, pretty much centered around the increasingly complex and growing network. It is in this area that most of the cost, risk and inflexibility we associate with pre-1980s business practices are still the norm today.
This is IT’s land of opportunity, driven by enterprises’ need to continue growing and vendors’ need to grow their markets as the management cost of each of their wares increases. This is where the next disruption may occur. We can look to recent history for similar examples.
In effect, VMware’s grand entrance into the data center market came at the expense of an old guard of server and software players who couldn’t wean themselves off the “escalating management and complexity” bandwagon fast enough. VMware established a $20 billion market cap by disrupting empires of complexity and manual labor and automating server management tasks.
This is where IT automation will go next: into the core of the network infrastructure. IT automation requires network automation; without it you’re stuck with increasingly complex and inflexible VLANs. And network automation, the next frontier for IT automation, will in turn require network infrastructure automation.
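To make the “inflexible VLANs” point concrete, here is a toy sketch of the manual work hiding behind a single workload move on a traditional network. Every name and command below is a hypothetical illustration, not any vendor’s CLI or API:

```python
# Hypothetical sketch: the manual work behind one workload move.
# Switch names and "commands" are illustrative assumptions only.

SWITCH_PATH = ["core-1", "agg-3", "tor-17"]  # hops between old and new host

def vlan_change_steps(vlan_id: int, switches: list[str]) -> list[str]:
    """Every switch in the path needs a hand-edited config change."""
    steps = []
    for sw in switches:
        steps.append(f"{sw}: create vlan {vlan_id}")
        steps.append(f"{sw}: add vlan {vlan_id} to uplink trunk")
    # ...and the records live outside the network entirely.
    steps.append("update spreadsheet / IPAM records by hand")
    return steps

steps = vlan_change_steps(212, SWITCH_PATH)
print(len(steps), "manual steps for one workload move")  # prints: 7 manual steps...
```

Multiply those steps by hundreds of moves a month and the “meat space” cost becomes obvious; an automation layer would drive all of this from one source of truth instead.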
As I said at last year’s Future in Review panel on Infrastructure 2.0: “Today’s networks are run like yesterday’s businesses.” On Wednesday I plan to address these issues again with senior executives from Cisco, VMware, Microsoft and others during this year’s follow-on Future in Review 2010 panel on i2.0.
We’ve seen this all before, except at a different layer in the OSI stack.
The Application Front End Boom
Five years ago we watched the application delivery space boom as a result of new demands on IT, especially the network. Enterprises spread their IT assets into smaller regional and branch offices, and networks evolved to support the transport of applications originally designed for LANs (local area networks).
Initially network teams invested huge amounts of time simply managing a host of new problems when enterprise apps slogged their way through unprepared networks. That time, expense and delivery pressure justified billions in new market caps and acquisitions.
Today we’re in the midst of yet another cycle of capital creation, and this one might dwarf the (OSI) layer 4-7 application delivery boom. I think the next boom will take place at layers 2 and 3, in the “meat space” of manual processes where a great portion of IT costs, delays and risks remain.
The layer 2-3 drivers (physical and logical addressing, connectivity, path determination) are all under increasing pressure from the rapid growth in network-connected devices and from the accelerating rate of change those devices bring, virtual machines above all. I wrote about the three horsemen in February 2009: 1) notebook computers; 2) virtualization; and 3) cloud computing.
Yet the problem is even bigger than the success of notebook computers, and extends into manufacturing and medical devices, ATMs and even SCADA devices never before connected to the network. The case for connectivity is so strong that network teams will face increasing pressure for the foreseeable future.
Cisco’s CTO recently predicted 1 trillion devices connected to the network by 2013.
Years ago the phone companies ultimately automated their equivalent of today’s layer 2-3 pros (the telephone operators) because networks became so large and complex that they couldn’t throw enough bodies at the problem. Today large enterprises are starting to feel the same pressure, in the form of rising operating expenses, inflexibility and outages.
Two recent blog posts caught my attention, both centered on the need for networks and IT to evolve beyond the “dumb network” idea. Richard Kagan (from my employer, Infoblox) recently blogged about his Interop panel on network evolution, and it wasn’t pretty:
The network is behind, way behind, when it comes to delivering the strategic benefits of cloud computing in terms of dynamic, flexible movement of workloads among computing centers. And from what I saw at Interop, it’s about to get worse – potentially, much worse.
Richard Kagan, GM, Infoblox Orchestration BU
Many of the vendors on the “Why Networking Must Fundamentally Change” panel sounded as if they wanted to hearken back to the glory days of mainframe computing:
Yes, that’s right: At current course and speed, you’ll be able to get all of the cloud bursting and DR and resilience that you want, as long as you buy everything from one networking vendor and/or use cloud providers that also use the same networking vendor as you do. It was otherworldly to be hearing this at, of all places, Interop. There should have been a riot, but there wasn’t. In fact, during the 1-1/2 hour discussion, not ONE of the networking vendors even uttered the word “cloud” – and no one seemed to care.
Richard Kagan, GM, Infoblox
Also on Richard’s panel was Doug Gourlay from Arista Networks. His recent blog relayed essentially the same message, citing the breakdown in scale for the enterprise data center:
Again, most network switching equipment was designed for campus e-mail distribution. As such, laptops come and go, and so do desktops. Nobody wants to keep rigid control over what port a laptop plugs into, so the LAN was designed to be as ‘plug and play’ as possible. MAC address auto-learning and flooding, DHCP, speed and duplex auto-negotiation: these all combined to make it so I can plug my laptop in just about anywhere, get an address, and do my job.
In the data center, especially for the largest data centers in the world, this may not be the case anymore. These features that made life simple simply do not scale economically anymore, as they force the network into a hierarchy that means significantly sub-linear price/performance. In fact it is often cheaper on a per-server basis to run a small network than a larger one: something no operator wants.
Doug Gourlay, VP Arista Networks
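Gourlay’s scale argument can be made with back-of-envelope arithmetic: MAC auto-learning means every switch in a flat layer-2 domain must track every endpoint, and virtualization multiplies the endpoints. All numbers below are illustrative assumptions, not vendor data:

```python
# Back-of-envelope: MAC table pressure in a flat layer-2 domain.
# Every figure here is an assumed, illustrative number.

servers = 2000           # physical hosts in one L2 domain
vms_per_server = 20      # virtualization multiplies endpoints
macs = servers * vms_per_server  # entries each switch must learn

tor_table_size = 16_000  # assumed MAC table capacity of a ToR switch

print(f"{macs} MAC entries needed per switch in a flat domain")
print("fits in assumed ToR table:", macs <= tor_table_size)
```

Under these assumptions a single virtualized domain already overflows the assumed table, which is exactly the point: the plug-and-play conveniences of the campus LAN become the scaling bottleneck of the data center.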
The lack of automation and rising unit management costs in larger IT environments take us back to the case VMware made years ago about rising server management costs as data centers grew. Now networks are in the same opex quicksand: the more hardware you throw at a problem, the more expensive each element is to manage manually.
The key question is whether today’s power players will be able to force hegemony by “dumbing down” the network, or whether there will be another boom, this time at layers 2 and 3 of the OSI stack.
That is why I think the Infoblox (my employer) acquisition of network change and configuration management (NCCM) player Netcordia is particularly interesting. Note: Both EMA and IDC have recently published their takes on the acquisition, and a Gartner report is also likely.
This acquisition will catch many in the industry by surprise. Most of the NCCM sector consolidation has happened at the hands of large management platform/suite vendors, such as IBM, BMC, EMC and HP. But it would be a mistake to underestimate the potential of this combination merely on those grounds. There is a degree of common purpose and synergy here which far surpasses what in some cases has been little more than a land grab to cover and control network management budgets.
EMA Impact Brief, 2010
Infoblox’s IP address management (or DDI) integrated with Netcordia’s NetMRI is likely the first set of network solutions with the potential for closed-loop automation between the address space and the network. That connectivity intelligence creates at least the promise of intelligent bindings between devices, policies and networks, and sets the stage for automating the expensive, error-prone meat space at the center of IT’s power triangle, between Cisco, Microsoft and VMware.
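The essence of that closed loop is reconciliation: compare the address plan (what the IPAM system says should exist) against what discovery actually finds on the wire, then act on the drift. The sketch below is purely hypothetical; the data shapes and function names are mine, not the Infoblox or NetMRI API:

```python
# Hypothetical closed-loop check: reconcile IPAM records ("intent")
# against network discovery results ("reality"). All names and data
# are illustrative; this is not any vendor's actual API.

ipam_records = {"10.1.0.5": "web-01", "10.1.0.6": "db-01"}      # the plan
discovered   = {"10.1.0.5": "web-01", "10.1.0.9": "unknown"}    # on the wire

def reconcile(ipam: dict, seen: dict) -> tuple[dict, dict]:
    """Return drift between the address plan and the live network."""
    rogue   = {ip: h for ip, h in seen.items() if ip not in ipam}
    missing = {ip: h for ip, h in ipam.items() if ip not in seen}
    return rogue, missing

rogue, missing = reconcile(ipam_records, discovered)
print("rogue devices:", rogue)     # on the network but not in the plan
print("missing hosts:", missing)   # planned but never seen
```

Today that reconciliation is a human with a spreadsheet; binding it to automated remediation is what turns DDI plus NCCM into a closed loop rather than two reports.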
Surround that network infrastructure automation engine with the connectivity power of IF-MAP and you have the capacity to spark a revolution in the networking industry and to set up a new capital investment refresh cycle driven more by the stresses of connectivity than by the speeds and feeds of switches and routers.
That may explain why the upcoming Virtual Interop (May 20) is on Infrastructure 2.0. If the networking industry doesn’t solve the rising management cost issue, someone like VMware might create another $20 billion market cap, this time from the mainframe-era fantasies of visionless networking players.