Posted by: Greg Ness | July 3, 2012

Outages and the Weakonomics of Public Cloud

The latest Amazon outage again raises the question of whether Amazon’s customers are technology leaders or laggards simply trying to cut IT costs to a bare minimum. Certainly some enterprises are using the public cloud appropriately, for projects where a non-critical application or service needs to be stood up quickly and on a small budget. Yet some companies seem to be running even predictable workloads in the public cloud, minimizing costs to the point of accepting higher risk. Consider this comment from Michael Lee at ZDNet:

“What this means, though, is that several companies have looked at their bottom line, and decided that the cost to mitigate the risk isn’t worth maintaining 100 per cent uptime. Bettin said that these organisations tend to be small, and, in order to maintain any sort of profit, they have to be cutthroat with their costs. This is something that the cloud has enabled, but it also puts them at significant risk.”

If public cloud (IaaS) is so inexpensive relative to traditional IT or private cloud, then why would any enterprise take shortcuts with the (inexpensive) redundancy and backup the public cloud enables? Note also that Amazon may have cut a few corners with its own data center infrastructure. At least one Amazon data center went down because its backup generators didn’t start (from “The Hidden Bugs That Made AWS Outage Worse”):

“However, it was a variety of unforeseen bugs appearing in Amazon’s software that caused the outage to last so long: for example, one datacentre failed to switch over to its backup generators and eventually the stores of energy in its uninterruptible power supply (UPS) were depleted, shutting down hardware in the region.”
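To see why a failed generator start turns into the scenario described above, it helps to model the failover chain: utility power drops, the UPS carries the load while the generators start, and the UPS battery is the only runway if they never do. Here is a minimal Python sketch; every number in it is an assumption chosen for illustration, not a figure from the Amazon incident.

# Toy model of a data center power-failover chain.
# All numbers below are illustrative assumptions, not figures from the AWS outage.

UPS_RUNTIME_MIN = 10.0   # assumed UPS battery runway at full load, in minutes
GEN_START_MIN = 2.0      # assumed generator start and transfer time, in minutes

def minutes_until_dark(generator_starts: bool) -> float:
    """Minutes until hardware loses power after a utility failure."""
    if generator_starts and GEN_START_MIN <= UPS_RUNTIME_MIN:
        return float("inf")  # generator picks up the load before the UPS depletes
    return UPS_RUNTIME_MIN   # UPS depletes and the racks go dark

print(minutes_until_dark(True))   # inf: normal failover
print(minutes_until_dark(False))  # 10.0: the failure mode described above

The UPS buys minutes, not hours; if the generators miss that window for any reason, including a software bug in the transfer logic, the outcome is the shutdown described in the quote.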

For all of the hoopla about the low cost of public cloud (see the Archimedius perspective at amazon-and-the-enterprise-it-monoculture-myth), the incidence and duration of unplanned downtime suggest that the economics of public cloud may at times be less attractive than the marketing portrays. Or, at minimum, the infrastructure marketed as robust may not be as resilient as advertised unless the customer pays extra.
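What does “paying extra” actually buy? A back-of-the-envelope sketch makes the trade-off from the ZDNet quote concrete; every input below is an assumption chosen for illustration, not real pricing.

# Back-of-the-envelope: is cross-zone redundancy worth the cost?
# All dollar figures are assumptions for illustration, not real pricing.

standby_cost_per_hr = 0.50     # assumed warm standby in a second zone (instance + storage)
hours_per_year = 8760

outage_hours_per_year = 8.0    # assumed unplanned downtime without redundancy
loss_per_outage_hr = 2000.0    # assumed business cost of one hour of downtime

redundancy_cost = standby_cost_per_hr * hours_per_year       # $4,380 per year
expected_loss = outage_hours_per_year * loss_per_outage_hr   # $16,000 per year

print(f"redundancy ${redundancy_cost:,.0f}/yr vs expected loss ${expected_loss:,.0f}/yr")

For these inputs redundancy wins easily; a small shop whose downtime costs little may rationally skip it, which is exactly the cutthroat cost behavior Lee describes.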

This goes back to the 500kW threshold discussed previously. In other words, the business case for running predictable workloads in the public cloud may be so weak that, in some cases, it requires shortcuts to the detriment of availability.

Dr. Zen Kishimoto discusses the 500kW threshold theory in his review of the recent FIRE panel on cloud computing:

 “At this point, Greg interjected an interesting fact drawn from his discussions with area experts. When your power consumption is less than 500 kW, it makes sense to outsource your computing to a public cloud. But if your consumption goes over 500 kW, it becomes more economical to have your own infrastructures (i.e., private clouds) so that you can tune them for your computing needs.”

Zen also discussed the emergence of power costs as a strategic concern for IT, driven in part by public cloud operating models:

“To do this massive task, for the past 25 to 30 years, 80–85% of the cost has been dedicated to software and those who manage software. Cloud changes this. For example, companies like eBay and PayPal have only a few patterns, which cuts the cost of maintaining them. When you do this, your opex primarily becomes energy, and the issue becomes where you run your loads and how you optimize the power/infrastructure ratio.”
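One common way to express that power/infrastructure ratio is PUE (power usage effectiveness): total facility power divided by IT load. A quick sketch with illustrative numbers shows how much facility efficiency moves the energy line of opex:

# PUE (power usage effectiveness) = total facility power / IT equipment power.
# The loads, PUE values, and rate below are illustrative assumptions.

def annual_energy_cost(it_kw: float, pue: float, usd_per_kwh: float) -> float:
    """Annual power bill for a given IT load, facility efficiency, and rate."""
    return it_kw * pue * 8760 * usd_per_kwh

print(annual_energy_cost(500, pue=2.0, usd_per_kwh=0.10))  # 876000.0: older facility
print(annual_energy_cost(500, pue=1.2, usd_per_kwh=0.10))  # 525600.0: modern facility

At the same 500kW IT load, the older facility in this sketch pays roughly $350,000 more per year for power, which is why the vintage of a provider’s data centers matters.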

At 500kW, power becomes a material (if not strategic) concern, which makes the data center’s electrical and mechanical design a material concern as well. An analyst recently told me that at least one cloud provider was leveraging data centers built before 2007. The question then becomes: how much are customers paying for power, the largest variable cost in larger environments? If that rate is too high, the threshold for moving to a private cloud in an advanced data center could be as low as 300kW.
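A toy crossover model makes that sensitivity concrete. Assume, purely for illustration, that a private cloud carries an annual fixed cost plus a variable cost per kW, while public IaaS pricing scales roughly linearly with load; the break-even load then falls as the provider’s effective per-kW price (including its power pass-through) rises:

# Toy crossover model: the IT load above which private cloud beats public IaaS.
# All dollar figures are invented for illustration.

PRIVATE_FIXED = 1_000_000   # assumed annual fixed cost of owning (facility, staff)
PRIVATE_PER_KW = 3_000      # assumed annual variable cost per kW of IT load

def crossover_kw(public_per_kw: float) -> float:
    """IT load, in kW, above which private becomes cheaper than public."""
    return PRIVATE_FIXED / (public_per_kw - PRIVATE_PER_KW)

print(crossover_kw(5_000))  # 500.0 kW: the threshold from the panel
print(crossover_kw(6_333))  # ~300 kW: a provider passing through pricier power

Under these invented numbers, a public per-kW price roughly 27% higher pulls the threshold from 500kW down to about 300kW, which is the sensitivity described above.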

If you don’t have 30 minutes to watch the FIRE cloud panel video, I suggest Zen’s blog for a succinct review of the key points.

Recent related blogs: Private Clouds and the 500kW Threshold, The Cloud and the Great Data Center Race, Amazon and the Enterprise IT Monoculture Myth, The Decline of the Public Cloud, and What Every CIO Should Know about Cloud Computing.


Responses

  1. My, how times have changed. Check out my latest blog on AWS and the IT transformation now underway: http://gregness.wordpress.com/2013/11/27/the-power-of-cloud-transformation/

