Google is one of the few dotcom-era companies that truly exceeded investor expectations. Imagine anyone brazen enough a couple of decades ago to predict that an advertising/directory company could grow to threaten the world’s tech giants. Yet here we are today, with Google stealing the tech thunder with an awe-inspiring vision of cloud computing that perhaps now exceeds its own ability to deliver.
While I wrote last week about the looming network infrastructure requirements of cloud computing, I thought it would make sense to continue with that theme and offer a contrarian vision to the one championed by both Google and luminary Nicholas Carr; one that addresses critical missing ingredients that could mark significant barriers to entry for Google.
I think there are at least four ominous slices in Google’s pie in the sky for enterprise IT: 1) Google and Carr’s vision significantly underestimates today’s state of the enterprise network; 2) Google insists for the most part on building everything internally; 3) the networking hardware players are established and better positioned to monetize the increasing demand for network intelligence; and 4) ASP 2.0 has now morphed into a form of IBM’s early “glass house” thinking.
Now that I’ve attacked the position of one of the world’s most successful technology companies and the author of one of my favorite summer reads, allow me to make my case.
The Besieged Network
Over the last three decades we’ve seen the IP network grow from 56k links between thousands of endpoints to today’s gigabit networks connecting hundreds of millions of hosts. That level of scale alone is enough to suggest adoption beyond anyone’s wildest dreams, especially the dreams of those involved in the ARPANET project.
Add to that scale increasing complexity: more than a hundred protocols/services (many of which were developed independently with no consideration of interoperability) now traverse this network between an exploding array of increasingly mission-critical devices.
I think this level of complexity is what Carr doesn’t get. It is one matter to get the water company to deliver a consistent pressure of water to a faucet, quite another to deliver various flavors on demand at various pressures at the push of a button. With electricity there is a standard current in most countries, not the ability to customize on demand. Yet enterprise networks, devices and applications interact at a much higher level of complexity, customization and authentication.
The following chart was inspired by a peer in the networking industry who had also noted today’s increasing gap between network demands and resources. This gap could worsen if IT spending goes flat year over year:
Let’s give Google some credit and suggest that they can do everything they are suggesting they will be able to do; that is, deliver the data center in the cloud for enterprises and the general public. We’ll concede Carr’s prediction that they will be able to deliver on the promises of a scalable, complex, dynamic cloudplex that can deliver robust, secure and highly available multi-host services to ever-changing populations of users and enterprises at a low cost. We’ll also concede that Google can transform itself into a software company and threaten Microsoft.
If Carr and Google are spot on, then what happens to the trend lines above? They get even worse.
If enterprise data centers shrink because more offerings come from the cloud there will still be more endpoints, more interoperability, more change, plus more traffic through the network at the “last block”. Offloading applications to the cloud is likely to actually INCREASE traffic and complexity.
As I mentioned last week, many of these protocols and services were never designed with the intent of interoperating with one another (unlike much simpler electrical currents). Most of the people and resources in the network are preoccupied with delivering these services to the right location. Cricket Liu (the author of O’Reilly’s DNS and BIND) has already said that core network services like DNS are “creaking a bit”.
In an upcoming bloxTV interview he talks about the need for increased flexibility in order to address new DNS security developments. These new demands are bound to make network professionals’ jobs even more challenging in larger, complex networks that can encompass multiple sites, data centers, mobile users, partners and even factory floors.
The Smoking Gun or Carr’s Paradox
We’re now also seeing other signs of trouble in the network. An October 2008 report (conducted in August/September and sponsored by Infoblox) of those managing these core network services (DNS, DHCP, IP address management, etc.) finds that the management cost per IP address actually INCREASES as networks scale to a certain size. Larger networks were also less flexible, as they were slower in patching their DNS servers for the widely publicized DNS vulnerability. I’ll be adding that report to Archimedius shortly.
Until this report I think many would have assumed that network infrastructure management would scale economically: each additional IP address would consume fewer resources, so more endpoints would mean a lower (or at least flat) cost per endpoint. This wasn’t the case:
Enterprise organizations have higher costs per IP address, with an average of $9.19 annually. SMB organizations, on the other hand, report an average annual cost of $7.12 for each IP address. The overall annual average among all organization sizes is $8.10. – Computerworld MarketVibe Oct 2008
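To see why that per-IP gap matters, here is a back-of-the-envelope sketch. The per-IP costs are the MarketVibe figures quoted above; the endpoint counts are hypothetical examples I chose for illustration, not numbers from the report:

```python
# Illustrative arithmetic only. Per-IP management costs come from the
# Computerworld MarketVibe figures quoted above; the endpoint counts
# below are hypothetical, chosen only to show how the gap compounds.
COST_PER_IP = {"SMB": 7.12, "Enterprise": 9.19}  # USD per IP per year

def annual_cost(segment: str, num_ips: int) -> float:
    """Total yearly management cost, assuming the per-IP cost holds."""
    return COST_PER_IP[segment] * num_ips

smb = annual_cost("SMB", 2_000)            # hypothetical small network
ent = annual_cost("Enterprise", 50_000)    # hypothetical large network

print(f"SMB (2,000 IPs):         ${smb:,.2f}/yr")
print(f"Enterprise (50,000 IPs): ${ent:,.2f}/yr")
# The larger network pays more per address AND has far more addresses,
# so total management cost compounds rather than amortizes with scale.
```

The point is simply that when unit cost rises with size, total cost grows faster than linearly as a network scales; the expected economies of scale run in reverse.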
Again, this complexity tax could slow the evolution to the cloud, since rising management costs could offset tangible savings in electricity and real estate, and perhaps the hypothetical synergies enabled by the cloudplex. I’m not disputing the eventuality of cloud computing; that would be silly since it’s already here. I am disputing its rate of adoption, especially for robust enterprise apps, and Google’s role in it.
Microsoft is in no position to rescue Google from the Carr paradox either. The current installed base of empowered endpoints with hard drives, etc. (other people’s money) allows them to monetize software and operating systems without having to invest in owning other people’s hardware. They don’t need to buy servers, and they already have the applications and customer base.
Microsoft will likely use its position of strength at the OS and application level to deliver equivalent pre-enterprise cloud capabilities that add value to their installed base; they’ll simply out-cloud Google. This is very similar to the potentially lethal Hyper-V attack on VMware I’ve discussed at Archimedius.
At the end of the day Google is making the leap from monetizing browsers on other people’s hardware to delivering applications from its own server hardware. Yet Google is neither an application nor a data center hardware company; it is a search and directory company driven primarily by ad revenues. This has significant implications because of other strategic decisions they’ve made.
The World’s Glass House
Google has a reputation for secrecy and for building key technology in-house. By not effectively leveraging the power of existing off-the-shelf solutions they embark down a higher-risk path and absorb more R&D expenses. Given that they want to scale larger and deliver software at a lower cost than enterprises that have decades of data center DNA, it might take more than cheap electricity, real estate and VMotion to come anywhere close to the margins they get from search.
A weaker economy could likewise be disproportionately painful for their cloud strategy, because the bulk of their revenue is tied to more discretionary ad spending. Building from scratch might take longer and cost more than partnering more aggressively with other technology leaders who bring expertise that is already being monetized over a broader base of buyers.
As various enterprises move toward cloud computing via pre-enterprise vertical or horizontal applications with lighter, simpler throughput requirements, wouldn’t it make more sense for Google to return to its core vision and simply become a host for other people’s applications, using other people’s money?
Within this scenario they could charge an application management and delivery fee and enable a new “lite” application service renaissance. They would become a service provider instead of a more classic ASP leveraging proprietary development. They wouldn’t have to invest in application development, and they could compete (with various partners) more squarely with the Microsoft ecosystem. That might put them directly at odds with the likes of Verizon, AT&T and even Amazon, but it would allow them economies similar to the search business and faster growth.
As I mentioned last week, I think cloud computing will require an unprecedented level of network intelligence. If enterprises decide to cut spending on network resources, the pressures for automation will only escalate. That’s why I think Cisco, Juniper, Brocade, Riverbed, Blue Coat and others are in a stronger position to monetize the clouds.
The network gear players have already established the infrastructure. Automation and management become another add-on, allowing them to continue their momentum in a weak or strong economy. Google, however, is building out its own software and hardware footprint for the most part and is sensitive to advertising revenue trends, which will likely make its ability to maintain revenue growth (and R&D investments) more dependent on a healthy economy.
Cloud computing may help trim some data center costs and move a few jobs (“up-sourced”) to the clouds, but that may be a much more discretionary decision than rising network strains and availability expenses. Think of the network infrastructure as a kind of beachhead for cloud computing. Without reliable, viable infrastructure the clouds will be out of reach for most potential customers.
Is the Cloud Really ASP 2.0?
We watched the dotcom bubble finance all kinds of eyeball valuations. Google made the model work by monetizing eyeballs beyond anyone’s imagination; they made search the ultimate software-as-a-service and built an incredible product. Their search engine is uncannily powerful and accurate.
That being said, can they do for the ASP industry what billions in market cap and abundant expertise couldn’t when the bubble burst and market caps adjusted?
Google has a history of beating expectations and succeeding where others have failed. Yet at their core they are an advertising-revenue-driven company that will likely continue to need to acquire software and data center expertise. As the cloud vision gets closer, will they realize that they were blinded by their own daunting success, or will they fulfill Carr’s well-researched prediction?
All of these slices have recently brought me to question Google’s pie in the sky. I think network intelligence and management will have to keep pace with the impacts of increasingly powerful and complex populations of endpoints, in addition to the competing flexibility, security and availability tradeoffs as TCP/IP spreads to new frontiers of connectivity. These demands will require new breakthroughs in network management and automation.
I also think that Google will need to adopt a more aggressive partnership and “best of breed” strategy. Robust cloud apps threaten Microsoft if they can be effectively developed, delivered and secured. Yet maybe Google should partner with specialized application players and deliver an ecosystem of lighter, mission-capable applications that would monetize ongoing investments and expertise, supplemented with on-the-job training. Nicholas Carr might view the cloud’s replacement of enterprise IT as inevitable; yet I think that power plants and data centers have about as much in common as grocery stores and Webvan.
I’m certain that cloud computing is already here and delivering lighter, pre-enterprise applications. I’m also certain that it will continue to service low-hanging fruit and generalist SMB needs. Yet can Google beat Microsoft to this market? I’m not so certain.
Greg Ness is a senior director at Infoblox. He was formerly at Blue Lane Technologies, Juniper Networks, Redline Networks and ShoreTel. He has been a blogger/columnist at Always On since spring 2004. This blog does not constitute investment advice. For a full disclaimer go to: About ARCHIMEDIUS.