Posted by: Greg Ness | December 19, 2009

Virtualization, Clouds and Meta Orchestration

One of the most powerful drivers of virtualization is the decoupling of applications from hardware.  That decoupling enables unprecedented flexibility, management and automation, all within the confines of virtual local area networks (VLANs).  It is likely one of the most significant recent developments in the IT industry.

We’ve compared this decoupling to the rise of the steam engine more than one hundred years ago, but perhaps that vision wasn’t big enough.  The ability to automate the creation and inter-VLAN/offline movement of servers is so powerful that some have suggested that physical servers could become obsolete in emerging cloud environments.

Virtualization makes networks strategic again.

Networking leaders like Cisco and F5 Networks see networks taking virtualization to the next level by enabling even higher levels of IT orchestration.  Why stop at the VLAN border?  Imagine the power of a completely dynamic, fluid infrastructure of data centers that can act as a single logical network, exploit even momentary changes in power costs or user demand, and even sidestep predictable natural disasters.

These new infrastructures are run by policy instead of manual configuration.  They are automated.  They are the IT supply chains of the future.

If you could deliver on this promise of meta orchestration you could dramatically reduce the expense of delivering IT services.  Electricity alone can be 60% of the operational expense of a data center.  Imagine VMs (virtual machines) programmed to chase inexpensive off-peak power around the world, or moving to a data center(s) with more capacity to address a demand spike.
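The policy described above can be sketched in code.  The following is a toy illustration (not any vendor's API; all names and prices are hypothetical) of a placement policy that sends VMs to whichever data centers currently offer the cheapest power, spilling over to the next-cheapest when capacity runs out:

```python
# Toy sketch of policy-driven VM placement: chase the cheapest power.
# DataCenter names, prices and capacities are invented for illustration.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    power_cost: float   # hypothetical spot price, $ per kWh
    capacity: int       # free VM slots

def place_vms(vm_count: int, centers: list[DataCenter]) -> dict[str, int]:
    """Assign VMs to the cheapest centers that still have capacity."""
    placement: dict[str, int] = {}
    for dc in sorted(centers, key=lambda d: d.power_cost):
        if vm_count == 0:
            break
        taken = min(vm_count, dc.capacity)
        if taken:
            placement[dc.name] = taken
            vm_count -= taken
    if vm_count:
        raise RuntimeError(f"{vm_count} VMs could not be placed")
    return placement

centers = [
    DataCenter("us-east", power_cost=0.12, capacity=40),
    DataCenter("eu-north", power_cost=0.07, capacity=25),  # off-peak, cheap hydro
    DataCenter("ap-south", power_cost=0.10, capacity=30),
]
print(place_vms(50, centers))   # fills eu-north first, overflow to ap-south
```

A real orchestrator would add live-migration costs, latency constraints and compliance rules to the policy, but the shape is the same: placement becomes a computed decision rather than a manual configuration.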

“Infrastructure 2.0 is not just about automation, but rather is about the orchestration of processes, which are actually two different things: the former is little more than advanced scripting, the latter requires participation and decision making on the part of the infrastructure involved.”

– Lori MacVittie, F5 Networks

Today’s tired status quo of legacy configurations between a myriad of software, systems and networks is one of the key barriers to the de facto adoption of virtualization, because those configurations ultimately limit the efficient utilization of IT resources, including power and people.  They confine the power of flexibility and automation to increasingly dense, isolated networks.

Every manual process and configuration between them represents a cost, a delay, or other wasted resources.  It also forces architects to overbuild capacity in each data center to address potential usage spikes or seasonality.  Note: Amazon seemingly got into the cloud business by leveraging that extra capacity.

Today’s Pony Express of manual configurations (infrastructure 1.0) is still required to keep data centers functional.  As the physical servers are replaced, the old processes remain.  Yet if these virtualized networks could be automated and orchestrated along with physical networks, the payoff would transform the economics of IT.

How did we get here?

Decades ago TCP/IP set the stage for eventual connectivity between millions of networked devices.  More recently virtualization migrated from development and test environments to production data centers, creating unparalleled system automation within pockets across thousands of data centers. 

That automation is now trapped because connectivity isn’t enough to unite/orchestrate all of the applications, networks, solutions and systems connected by TCP/IP.  That connectivity has been patched together piecemeal over decades with layers of custom configurations maintained with manual processes. 

One prominent cloud VP told me recently that it costs him more than $700 simply to move a server (one that costs $1,100 to purchase new) because of the complexity and limitations inherent in the physical data center infrastructure.  That complexity also costs millions in lost business, through the excessive delays in delivering new services from complex environments and the inability to absorb user spikes.

In some businesses margins are slim, or “free services” introduce scattered micro-expenses into operations.  Inflexibility and outdated infrastructure can hide legacy expenses or lost revenues across a wide array of cost centers.

What is needed today?

A new abstraction layer is now needed to allow TCP/IP networks to be orchestrated across diverse environments (physical and virtual) and great distances.

Today’s IT infrastructures are larger and more complex than anyone expected back in 1983, when the ARPANET was migrated to TCP/IP.  With virtualization came unprecedented potential for change and movement, and increasing pressure on an already tired status quo.  Soon after, cloud computing models architected for the latest demands started popping up, applying still more pressure on a tired, complex world already falling behind.

As cloud computing spreads, these tired status quos will have to either evolve (e.g., into more elastic private clouds) or be replaced by more efficient and elastic public cloud services.  Bechtel’s recent coverage in CIO Magazine offers insight into the future direction of enterprise IT:

We operate “as a service provider to a set of customers that are our own [construction] projects,” Ramleth says. “Until we can find business applications and SaaS models for our industry, we will have to do it ourselves, but we would like to operate with the same thinking and operating models as [SaaS providers] do.” 

– Geir Ramleth, CIO, Bechtel, quoted in CIO, Oct 2008

The Era of Meta Orchestration

That evolution will likely require a kind of meta orchestration, probably enabled by a new layer of IT metadata.  Cisco’s Chris Hoff blogged about IF-MAP last year and raised some interesting meta orchestration possibilities:

“While IF-MAP has potential in conventional non-virtualized infrastructure, I see a tremendous need for it in our move to infrastructure 2.0 with virtualization and cloud computing… Integrating, for example, IF-MAP with VM-Introspection capabilities (in VMsafe, XenAccess, etc.) would be fantastic as you could tie the control planes of the hypervisors, management infrastructure, and provisioning/governance engines with that of security and compliance in near-time.”

– Chris Hoff, Rational Survivability, Nov 2008
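The IF-MAP idea Hoff describes is essentially publish/subscribe over shared metadata: components publish facts about identifiers (VMs, IP addresses), and other components react to changes.  The real protocol is a SOAP/XML standard from the Trusted Computing Group; the sketch below is only a conceptual, in-memory analogy with invented names:

```python
# Conceptual sketch of the IF-MAP pattern: a shared metadata map where
# publishers (e.g. a hypervisor) announce facts about identifiers and
# subscribers (e.g. a security engine) react.  Illustrative only -- the
# real IF-MAP is a SOAP/XML wire protocol, not a Python class.

from collections import defaultdict
from typing import Callable

class MetadataMap:
    def __init__(self):
        self._meta = defaultdict(dict)   # identifier -> accumulated metadata
        self._subs = defaultdict(list)   # identifier -> subscriber callbacks

    def publish(self, identifier: str, **metadata) -> None:
        """Merge new metadata for an identifier and notify subscribers."""
        self._meta[identifier].update(metadata)
        for callback in self._subs[identifier]:
            callback(identifier, dict(self._meta[identifier]))

    def subscribe(self, identifier: str, callback: Callable) -> None:
        self._subs[identifier].append(callback)

events = []
imap = MetadataMap()

# A security engine subscribes to a VM's metadata...
imap.subscribe("vm-42", lambda ident, meta: events.append((ident, meta["host"])))

# ...and the hypervisor publishes a live-migration event.
imap.publish("vm-42", host="esx-paris-07", ip="10.8.1.42")
print(events)   # the subscriber saw the move without polling the hypervisor
```

This is the "near-time" control-plane tie Hoff describes: security and compliance systems learn about a VM's move the moment the hypervisor publishes it, rather than discovering it later through scans.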

As the private and public cloud alliances form, expect all camps to eventually have a metadata play, as metadata will enable new IT orchestration capabilities.  These capabilities will drive IT costs substantially lower.

Thiele’s Mesh

On an Infoblox-sponsored Nemertes webinar on virtualization and the future of the network in September, operations director and CTO Mark Thiele talked about how smaller networks of data centers could be built at a fraction of the cost of an equivalent “Fort Knox.”

Today’s regional or branch office may indeed have more resemblance to the data centers of tomorrow than the massive “Fort Knox” cloudplexes being built today by the likes of Google and Amazon.  These data centers are concentrated today because of the power, intelligence and orchestration limits inherent in today’s networks.  Mainframes were “all that” until TCP/IP (infrastructure 1.0 or basic connectivity).  Cloudplexes will be “all that” until infrastructure 2.0 (dynamic, policy-driven connectivity).

Others May See IT Evolving Differently

Certainly the system-centric and chip-centric vendors (like Intel or IBM) might have a different view, anticipating that IT orchestration will evolve, but only in ever-denser isolated pockets, driven by system or hardware breakthroughs.  In that view, the network is a mere sideshow, not a strategic element.

Intel’s recent cloud-on-a-chip announcement seems to bet on such a direction, toward physicalization (centralized processing) of servers rather than virtualization.  By making concentrated data centers more energy-efficient and tactically flexible, Intel is making a case for an evolved, increasingly dense, server-centric future.

Who knows how these competing visions will play out, but one could be surprised by the advance timing of Intel’s cloud chip announcement.

If virtualization crosses the chasm in two years, physicalization may face an uphill battle against virtualization as the de facto standard.  If the network players (Cisco, F5, Juniper, etc.) unleash infrastructure 2.0, then cloud chips may target smaller data center meshes for adoption; the data center cloud becomes a collection of boxes (perhaps racks) scattered around the world.

The Density Disadvantage

In an infrastructure 2.0 world, density can mean disadvantage.  It’s all about having physically distributed processing power behave as if it were in a single location, yet dynamically optimized to respond to changing conditions.  That’s why I find it interesting that Intel’s cloud chip advance was pre-announced, possibly to influence customer cloud expectations:

“All of this [Intel cloud chip advances] is still in the research phase, but it’s at least far enough along to begin talking about. And given that many data center managers have been around long enough to remember the advantages of centralized processing, it probably won’t come as too much of a shock.”

– Ed Sperling, Forbes, Dec 2009

Perhaps Intel needs a competing vision to counter the likes of IBM (dynamic infrastructure), Cisco’s Acadia venture with EMC, and other cloud container partnerships likely yet to be announced.  So it enters the cloud conversation with a futuristic, powerful and flexible chip.

Today’s networks certainly aren’t ready to unleash the full potential of virtualization, but when they are, a new age of IT orchestration will follow.  Disparate and complex IT islands, built from layers of applications and operating systems, multiple versions of hardware, and specialization tied to hardware, applications and networks, all maintained and tracked manually, will be replaced by a new layer of meta orchestration and new applications designed to deliver processing power efficiently, as needed, based on policies, compliance and real-time developments.

In the meantime, network administrators still rely on spreadsheets to keep up with network IP addresses.  So those massive cloudplexes promise to be around for some time.
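The spreadsheet problem above is exactly what IP address management automation replaces.  A minimal sketch, using only Python's standard `ipaddress` module (the class and hostnames are invented for illustration, not any product's API), shows the core idea: addresses are allocated from a subnet programmatically, so duplicate and stale entries become impossible by construction:

```python
# Minimal sketch of programmatic IP address management: instead of a
# spreadsheet row per address, allocations come from the subnet itself.
# SimpleIPAM and the hostnames are hypothetical, for illustration only.

import ipaddress

class SimpleIPAM:
    def __init__(self, cidr: str):
        self.network = ipaddress.ip_network(cidr)
        self.pool = iter(self.network.hosts())   # usable host addresses
        self.leases: dict[str, str] = {}         # hostname -> address

    def allocate(self, hostname: str) -> str:
        if hostname in self.leases:
            return self.leases[hostname]         # idempotent: no duplicates
        addr = str(next(self.pool))              # next free address
        self.leases[hostname] = addr
        return addr

ipam = SimpleIPAM("192.168.10.0/29")
print(ipam.allocate("web-01"))   # 192.168.10.1
print(ipam.allocate("web-02"))   # 192.168.10.2
print(ipam.allocate("web-01"))   # 192.168.10.1 again -- same lease, no new row
```

Production IPAM adds persistence, DHCP/DNS integration and release/reclaim logic, but even this toy version makes the point: the authoritative record is the allocator's state, not a manually edited sheet.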

I am a senior director at Infoblox.  You can follow my rants in real-time at www.twitter.com/Archimedius.
