The new issue of Wired has a feature article on Amazon Web Services, the online retailer’s computing utility. The article gives a sense of how rapidly the utility business, and its underlying infrastructure, is expanding. When AWS launched in earnest a couple of years ago, with the S3 storage utility, Amazon’s computer system was at times running at just 10% of its capacity, according to CEO Jeff Bezos. Now, AWS demand has “far exceeded the excess capacity of our internal system,” says AWS head Andy Jassy. AWS is “now big enough to be piling up its own silicon.”
That’s good news for both Amazon and its AWS clients. It implies that the company has probably achieved what traditional utilities call a high “diversity factor” – a broad mix of clients with different and complementary demand patterns – which means, as well, that the system’s capacity utilization is probably quite high and quite steady. (When the Amazon system was only used by the Amazon store, in contrast, its diversity factor and capacity utilization were woefully low – a trait it had in common with most private corporate IT operations.) For any utility, achieving a high capacity utilization, or “load factor,” is crucial to success because it allows the capital invested in infrastructure to be used efficiently. For clients, in turn, that often translates into lower prices.
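To make those two terms concrete, here is a minimal illustrative sketch – with made-up hourly demand numbers, not anything drawn from Amazon or the Wired piece – of how load factor and diversity factor are calculated:

# Hypothetical hourly demand (arbitrary units) for three clients with
# complementary usage patterns -- illustrative numbers only.
retail_store = [20, 15, 10, 10, 25, 60, 80, 70]   # evening peak
batch_jobs   = [70, 80, 75, 60, 20, 10, 10, 15]   # overnight peak
startup_app  = [30, 25, 20, 40, 55, 50, 45, 35]   # daytime traffic

clients = [retail_store, batch_jobs, startup_app]

# Combined (coincident) demand on the shared infrastructure, hour by hour.
combined = [sum(hour) for hour in zip(*clients)]

# Load factor: average load divided by peak load. Higher is better --
# it means the installed capacity is busy most of the time.
def load_factor(profile):
    return (sum(profile) / len(profile)) / max(profile)

# Diversity factor: sum of the clients' individual peaks divided by the
# peak of the combined load. Values above 1 mean the peaks don't coincide.
diversity = sum(max(c) for c in clients) / max(combined)

print("load factor, retail client alone:", round(load_factor(retail_store), 2))  # ~0.45
print("load factor, combined system:    ", round(load_factor(combined), 2))      # ~0.86
print("diversity factor:                ", round(diversity, 2))                  # ~1.59

The combined profile has a much higher load factor than any single client’s, which is the economic point of a shared utility: the same installed capacity does more work.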
Nick,
Thanks for the heads-up. You can actually find the article on Wired.com:
wired.com/techbiz/it/magazine/16-05/mf_amazon
Wired usually lets their articles loose a few days before publication. You just have to know where to look : )
mz
Thanks. A link has been added. Nick
That and the $131 million in revenue from AWS in Q4 are both good signs for the PaaS industry. These are exciting times to be involved in large-scale web computing!
“For clients, in turn, that often translates into lower prices.”
Prescient Nick:
http://developer.amazonwebservices.com/connect/ann.jspa?annID=313
Or do you have a mole over at Amazon?
kumulan
Nick,
I haven’t read any of your books, but the more I read your posts the more curious I get :-) I mention this because I’m aware I may be missing the background to your comments and so might not get the whole picture.
You say, “When the Amazon system was only used by the Amazon store, in contrast, its diversity factor and capacity utilization were woefully low – a trait it had in common with most private corporate IT operations.” I see your point. If I liken IT to a utility model, then the more customers you have – with demand coming at different hours, different intervals, different volumes – the smoother your aggregate demand (I wonder how you translate power factor to IT?). Ultimately, with an infinite number of customers spanning the globe, your load would be flat, allowing you to right-size your supply rather than oversize it to deal with spikes.
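To see that smoothing effect, here is a toy simulation – with randomly generated, invented demand rather than data from any real system – showing how the peak-to-average ratio of the aggregate load falls as independent, bursty customers are added:

import random

random.seed(1)

def bursty_customer(hours=168):
    # One week of hourly demand: mostly idle, with occasional sharp spikes.
    return [random.choice([1, 1, 1, 2, 3, 20]) for _ in range(hours)]

def peak_to_average(profile):
    return max(profile) / (sum(profile) / len(profile))

for n in (1, 10, 100, 1000):
    customers = [bursty_customer() for _ in range(n)]
    aggregate = [sum(hour) for hour in zip(*customers)]
    print(n, "customers -> peak/average =", round(peak_to_average(aggregate), 2))

With a single customer the system has to be sized for several times its average load; with a thousand, the aggregate is close to flat.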
Correct me if I’m wrong, but this can be achieved at the enterprise level – by consolidating loads from different applications with varying demands onto the same physical box (virtualization of memory, CPUs, networks, and storage). It gets even better if the loads are global and share the same hardware. Of course, if you extend this far enough you’ll achieve the same loads and efficiencies as Amazon. Today’s virtualization technologies are functional and valuable but still immature; give them time and the efficiencies will improve. All that to say, I think enterprises have a way to go before they run out of ways to manage capacity and value better.
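A rough back-of-the-envelope sketch of that consolidation argument – made-up workload figures and a deliberately crude assumption that individual peaks rarely coincide:

# Made-up peak and average demand, in "server-equivalents", for applications
# that would traditionally each get dedicated hardware.
apps = {
    "web_frontend":   {"peak": 12, "avg": 4},
    "reporting":      {"peak":  8, "avg": 2},
    "batch_etl":      {"peak": 10, "avg": 3},
    "internal_tools": {"peak":  5, "avg": 1},
}

# Dedicated model: every application is sized for its own peak.
dedicated = sum(a["peak"] for a in apps.values())

# Consolidated (virtualized) model: size for the coincident peak. Crude
# assumption: the busiest app peaks while the others run at average load.
busiest = max(apps.values(), key=lambda a: a["peak"])
consolidated = busiest["peak"] + sum(a["avg"] for a in apps.values() if a is not busiest)

print("capacity needed, one box per app:", dedicated)      # 35
print("capacity needed, consolidated:   ", consolidated)   # 18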
The downside of all of this, including all the cloud efforts, is the underlying complexity. Not only is Google’s hardware investment growing yearly, but so is the support infrastructure required (people and systems). This leads me to a different point: IT utilities are different from electrical ones. Electrical utilities are geographically limited, while IT is not – put in bigger ‘pipes’ and the data could be flowing to servers across the globe rather than across the city. Clouds can balance load across a geographically distributed infrastructure, but that becomes problematic when you consider that more complex systems have a higher tendency toward catastrophic failure. What would happen if half the world’s computers shut down at the same time? That will never happen with local computing (except at the affected location), which is why some inefficiency is desirable – it’s required for redundancy. Imagine if the power outage that affected parts of Eastern Canada and the US East Coast a couple of years ago (due to the system’s complexity) had affected a quarter of the planet. Hmmm