For any utility, profitability hinges on using your capital assets – your installed capacity – as efficiently as possible, and the way you do that is through sophisticated pricing schedules. In essence, you want to reward those customers whose usage patterns allow you to use your installed capacity efficiently (by cutting their prices) while penalizing those customers whose usage patterns undermine your ability to use your installed capacity efficiently (by raising their prices). If you do this effectively, you get the best possible return on every dollar of capital you invest in infrastructure, and you become more profitable as you grow. If you do it poorly, you become less profitable as you grow and, ultimately, you croak.
By all accounts, Amazon Web Services’ path-breaking computing utilities, particularly its S3 storage utility, are fabulously popular. But they don’t yet seem to be profitable, and as Amazon CEO Jeff Bezos recently disclosed, they are now capacity-constrained. In other words, to continue to grow, Amazon has to expand its installed capacity through investments in data centers, drives, processors, bandwidth, and other plant and equipment. In order to become profitable as it makes those capital investments, it has to begin to shape more aggressively the way customers use its services. It can no longer treat all customers as equals.
In this light, Amazon’s original flat-rate pricing for its utility services, while having the advantage of simplicity, becomes unsustainable. Electric utilities, to take an earlier example, started off with flat-rate pricing, but they only became hugely successful when they began to customize their pricing schedules to the usage patterns of individual customers. So it’s no surprise that Amazon has announced that it will abandon its flat-rate pricing schedule for S3 on June 1 and introduce a more complex pricing schedule with tiered fees for bandwidth usage and a new fee for the number of requests made on the system. (Storage fees themselves will remain fixed – for the time being.) Amazon puts a customer-friendly spin on the change:
With Amazon S3 recently celebrating its one year birthday, we took an in-depth look at how developers were using the service, and explored whether there were opportunities to further lower costs for our customers. The primary area our customers had asked us to investigate was whether we could charge less for bandwidth.
There are two primary costs associated with uploading and downloading files: the cost of the bandwidth itself, and the fixed cost of processing a request. Consistent with our cost-following pricing philosophy, we determined that the best solution for our customers, overall, is to equitably charge for the resources being used – and therefore disaggregate request costs from bandwidth costs.
Making this change will allow us to offer lower bandwidth rates for all of our customers. In addition, we’re implementing volume pricing for bandwidth, so that as our customers’ businesses grow and help us achieve further economies of scale, they benefit by receiving even lower bandwidth rates. Finally, this means that we will be introducing a small request-based charge for each time a request is made to the service.
The end result is an overall price reduction for the vast majority of our customers. If this new pricing had been applied to customers’ March 2007 usage, 75% of customers would have seen their bill decrease, while an additional 11% would have seen an increase of less than 10%. Only 14% of customers would have experienced an increase of greater than 10%.
That’s all well and good, but the real reason for the pricing change is to shift usage patterns to Amazon’s benefit. It’s true, though, that ultimately the shift in usage patterns will generate broad benefits to customers, because the more efficiently Amazon uses its installed capacity, the lower it will be able to push its prices. (When a utility is in its early growth phase, it makes more money by cutting prices than by raising them – if it’s well managed.)
In the immediate term, though, some users, particularly those using S3 to store large quantities of fairly large files, will make out well, while other users, particularly those using the service as a web server for lots of small files, will get a whack to the wallet. You can see the contrast by reading, on the one hand, the response of SmugMug CEO Don MacAskill (a beneficiary) and, on the other, the posts on Amazon Web Services’ developer site from users who will suffer.
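To make that contrast concrete, here is a rough back-of-the-envelope sketch in Python. The rates and workload figures below are placeholders chosen for illustration, not Amazon’s actual rate card; the point is simply that once requests are billed separately from bandwidth, a workload that moves the same number of gigabytes as millions of tiny objects pays far more than one that moves them as a few thousand large ones.

```python
# Back-of-the-envelope comparison of two S3-style workloads under
# disaggregated pricing. All rates below are illustrative placeholders,
# not Amazon's published prices.

def monthly_bill(transfer_gb, requests,
                 bandwidth_rate=0.18,          # assumed $ per GB transferred
                 request_rate=0.01 / 10_000):  # assumed $ per request
    """Bandwidth billed per GB, plus a small fee per request."""
    return transfer_gb * bandwidth_rate + requests * request_rate

# Workload A: backup-style storage -- 500 GB served as 5,000 large objects.
bill_a = monthly_bill(transfer_gb=500, requests=5_000)

# Workload B: web-serving -- the same 500 GB served as 100 million small
# objects (thumbnails, icons, page fragments).
bill_b = monthly_bill(transfer_gb=500, requests=100_000_000)

print(f"Large-file workload: ${bill_a:,.2f}")   # bandwidth dominates the bill
print(f"Small-file workload: ${bill_b:,.2f}")   # request fees exceed the bandwidth charge
```

Under the old bandwidth-only schedule, both workloads would have been billed identically even though the small-file one generates vastly more requests; that cross-subsidy is exactly what the new schedule removes.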
The big picture is simple: Amazon’s success in utility computing is forcing it to become more sophisticated as a utility operator and that inevitably means drawing distinctions between users and reflecting those distinctions in variable pricing. S3 may only be a year old, but it’s already growing up.
It’s interesting (to me, at least) that Amazon talks about percentages of customers rather than percentages of service utilization. I wonder who that unfortunate 14% of customers are who will see a more-than-10% price jump, and how much S3 capacity they consume. My point here is that while Amazon and similar ventures are doing a textbook-simple search of the business-model space, it is awfully hard for a customer to build something large on top of that, given the uncertainties. And where does that leave the providers?
Amazon et al. are in the tough spot of simultaneously trying to offer a platform and build out huge plants that manufacture that platform. If it is discovered, after a little while, that the platform is not quite right, do we wind up with a rotten foundation? Or darkened plants full of sunk hardware costs? Or some mix of the two? And if the providers thrash the customers too much, will the whole market turn sour?
The idea of computing-as-utility is about as old as commercial computing itself (you might remember “timesharing”). The idea was strongly revived among the research and business communities in the 1980s, giving us experiments like the “Blit terminal,” products like “X terminals,” and polemics from the likes of Larry Ellison.
What seems to have changed recently is only that a few big players have more cash than they know what to do with and not enough products in the pipeline, in a climate that more or less advises hedging in real estate, so they are more inclined to place the bet that Oracle never quite needed to.
And that leads to what I fear we might be seeing. Sure, it’s a no-brainer that computing-as-utility lies in the future. The problem is an apparent build-out of plants that is way, way ahead not only of demand but of any established platform! Perhaps it is a bit as if Henry Ford, having gleaned the gist of where the automobile industry would go, rather than cranking out Model T’s, had tried to leapfrog directly to a vast 21st-century system of robotic plants and just-in-time manufacturing (all finely tuned to make Model T’s, of course).
-t
Many outsourcing clients of EDS, HP, etc. are paying $3-6 per GB per month. Sure, they get much better disaster recovery and more customized corporate service, but I would suggest it’s their customers who should be complaining, not Amazon’s at 15¢…
Does anyone have data on whether Amazon’s S3 offers reliable performance? The main reason I wonder is that Amazon.com itself (in America) is consistently one of the slowest-responding sites I visit. I use a fancy web browser (iRider) to queue several pages for download, and it can be astonishing how slow Amazon is compared to virtually any other highly-trafficked site.
One wonders whether this is a computational bottleneck (database or other code execution) or just a bandwidth one.
Really nice post, Nick. It is fascinating to watch Amazon right now as they build on their success with S3/EC2. One other point that I believe may be responsible for the price change (not unlike your summary) is that historical usage of their data centers and facilities had been shaped by their own e-commerce platform and the factored-out pieces of it they offer to others.
With the advent of EC2/S3 the whole profile of traffic and load has changed; most of the big users have more diverse usage patterns. As EC2/S3 becomes a greater part of their data center load, they have to encourage traffic shaping to make it more cost-effective going forward, hence the adjustments to their pricing model. They also could not have foreseen that based on their prior traffic modeling.
regards
Al