For Amazon.com’s utility-computing operation, Amazon Web Services, 2009 will be a crucial year, as the company is looking to expand beyond its traditional customer base of web developers and other relatively small-scale operators and push its services into the heart of the enterprise market. AWS is hoping to capitalize on the current economic climate, where cash, suddenly, is in short supply, to convince larger companies to begin shifting their computing requirements out of their own data centers and into the cloud, transforming IT from a capital expense to a pay-as-you-go operating expense. Amazon CTO Werner Vogels makes the pitch explicitly in a post on his blog today:
These are times where many companies are focusing on the basics of their IT operations and are asking themselves how they can operate more efficiently to make sure that every dollar is spent wisely. This is not the first time that we have gone through this cycle, but this time there are tools available to CIOs and CTOs that help them to manage their IT budgets very differently. By using infrastructure as a service, basic IT costs are moved from a capital expense to a variable cost, building clearer relationships between expenditures and revenue generating activities. CFOs are especially excited about the premise of this shift.
Beyond the marketing push, Amazon is rushing to make its services “enterprise-ready” at a technical level. It has announced today that its computing service, Elastic Compute Cloud, or EC2, is officially out of beta and operating “in full production” (whatever that means). It is also now offering a service-level agreement, or SLA, for EC2, guaranteeing that the service will be available 99.95% of the time. And, as previously announced, EC2 now supports virtual machines running Windows as well as Linux.
Equally important, Amazon has announced plans to beef up AWS’s management controls during the coming year, an essential step if it’s to entice big companies to begin shifting mainstream applications into Amazon’s cloud. It says it will offer four new or expanded capabilities in this regard (a rough sketch of how they might one day be scripted follows the list):
Management Console – The management console will simplify the process of configuring and operating your applications in the AWS cloud. You’ll be able to get a global picture of your cloud computing environment using a point-and-click web interface.
Load Balancing – The load balancing service will allow you to balance incoming requests and traffic across multiple EC2 instances.
Automatic Scaling – The auto-scaling service will allow you to grow and shrink your usage of EC2 capacity on demand based on application requirements.
Cloud Monitoring – The cloud monitoring service will provide real time, multi-dimensional monitoring of host resources across any number of EC2 instances, with the ability to aggregate operational metrics across instances, Availability Zones, and time slots.
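None of these four services had shipped when the announcement was made, so any code is necessarily speculative. Still, as a rough sketch of what “management controls” mean in practice, here is what the same four capabilities look like when scripted against today’s boto3 SDK; the region, the load-balancer name, and the pre-existing auto-scaling group are all invented for illustration:

```python
# A sketch of the four promised capabilities, using the modern boto3 SDK
# as a stand-in (none of these APIs existed at announcement time).
import datetime
import boto3

REGION = "us-east-1"  # hypothetical region choice

# "Management console" equivalent: a global picture of running instances.
ec2 = boto3.client("ec2", region_name=REGION)
for reservation in ec2.describe_instances()["Reservations"]:
    for inst in reservation["Instances"]:
        print(inst["InstanceId"], inst["State"]["Name"], inst["InstanceType"])

# Load balancing: spread incoming HTTP traffic across EC2 instances.
elb = boto3.client("elb", region_name=REGION)
elb.create_load_balancer(
    LoadBalancerName="demo-lb",  # made-up name
    Listeners=[{"Protocol": "HTTP", "LoadBalancerPort": 80,
                "InstanceProtocol": "HTTP", "InstancePort": 80}],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Automatic scaling: grow and shrink EC2 capacity on demand.
autoscaling = boto3.client("autoscaling", region_name=REGION)
autoscaling.put_scaling_policy(
    AutoScalingGroupName="demo-asg",   # assumes the group already exists
    PolicyName="scale-out-on-load",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,               # add one instance per trigger
)

# Cloud monitoring: aggregate a host metric across instances and time.
cloudwatch = boto3.client("cloudwatch", region_name=REGION)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    StartTime=datetime.datetime.utcnow() - datetime.timedelta(hours=1),
    EndTime=datetime.datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
print(stats["Datapoints"])
```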
Amazon is not the only company that sees a big opportunity to expand the reach of cloud computing in the coming months. Yesterday, the hosting giant Rackspace announced a big expansion of its cloud computing portfolio, acquiring two cloud providers: Slicehost, a seller of virtual computing capacity, and Jungle Disk, which offers web-based storage. And next week Microsoft is expected to announce an expanded set of cloud-computing services that will compete directly with Amazon’s.
While it’s true that the economic downturn will provide greater incentives for companies to consider cloud services as a means of reducing or avoiding capital expenditures, that’s not the whole story. Companies also tend to become more risk-averse when the economy turns bad, and that may put a brake on their willingness to experiment with cloud services. Amazon’s moves today – ditching the beta label, offering service guarantees, promising more precise management controls, speaking to the CFO as well as the CIO – are intended not only to promote the economic advantages of cloud computing but to make the cloud feel “safer” to big companies. Whether it will succeed or not remains to be seen.
And the really interesting thing that’s slipped in is that Windows is licensed on a per-machine-hour basis. That’s going to level the playing field in favour of Windows by taking away the sticker shock of the licences. Startups with a 40%+ APR cost of capital will like this!
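To make the cost-of-capital point concrete, here is a sketch with entirely invented figures (no actual Windows surcharge or licence price is quoted above):

```python
# Hypothetical numbers only: why per-hour licensing appeals at a 40%
# cost of capital. Compare buying a licence upfront with paying as you go.
license_upfront = 1000.0   # invented sticker price
hourly_premium = 0.125     # invented per-hour Windows surcharge
cost_of_capital = 0.40     # the 40% APR mentioned above

hours_used = 4000          # say the instance runs ~11 hours a day for a year
pay_as_you_go = hourly_premium * hours_used
upfront_with_capital = license_upfront * (1 + cost_of_capital)

print(f"Pay-as-you-go licence cost: ${pay_as_you_go:.0f}")        # $500
print(f"Upfront licence + capital:  ${upfront_with_capital:.0f}")  # $1400
```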
99.95% crudely equals 4 1/2 hours of downtime a year. Which is okay for most commercial applications. Presumably clever use of availability zones would increase this. Assuming that the SLA is meaningful…
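The arithmetic behind that estimate, plus the availability-zone point, in a few lines (treating zone failures as independent is my assumption, not something the SLA promises):

```python
# Back-of-envelope downtime math for a 99.95% SLA.
HOURS_PER_YEAR = 365 * 24  # 8760

availability = 0.9995
downtime_hours = (1 - availability) * HOURS_PER_YEAR
print(f"Single zone: {downtime_hours:.2f} hours of downtime/year")  # ~4.38

# If two availability zones failed independently (a big assumption),
# the app is down only when both are down at once:
two_zone_availability = 1 - (1 - availability) ** 2
print(f"Two zones: {(1 - two_zone_availability) * HOURS_PER_YEAR * 60:.2f} "
      "minutes of downtime/year")  # ~0.13 minutes
```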
There’s really not much advantage to hosting the desktop on a remote server if you still have to use a fat client (Windows Remote Desktop) to access it. What would really be the bomb would be to trash everything above the transport level (TCP/IP) and come up with a presentation-level protocol that could be used to render the GUI and send back user events (mouse clicks, key presses, or even USB traffic) to the server, making the desktop essentially just another cable channel. Does this sound like interactive television? One day, instead of the newest MacBook, could we be like Guy Montag, saving our money for the fourth-wall TV?
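A toy sketch of what the upstream half of such a presentation-level protocol might look like; the wire format, event codes, and server address are entirely made up:

```python
# Toy sketch of the idea above: a presentation-level protocol where the
# client sends nothing but user events upstream. Invented wire format.
import socket
import struct

EVENT_MOUSE_CLICK = 1
EVENT_KEY_PRESS = 2

def encode_event(event_type: int, a: int, b: int) -> bytes:
    # 1-byte type + two 16-bit payload fields, network byte order
    # (x/y coordinates for a mouse click, keycode/modifiers for a key press)
    return struct.pack("!BHH", event_type, a, b)

def send_click(sock: socket.socket, x: int, y: int) -> None:
    sock.sendall(encode_event(EVENT_MOUSE_CLICK, x, y))

# Usage (assumes a hypothetical display server at display.example.com:9000):
# with socket.create_connection(("display.example.com", 9000)) as sock:
#     send_click(sock, 640, 360)
```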
“Companies also tend to become more risk-averse when the economy turns bad” Oh really? Not so much for publicly traded companies with real-world auditors. As much as we would love to embrace a multi-cloud environment and lighten our huge IT infrastructure load, until we see the SAS 70 certification, the 99.95% SLA doesn’t mean too much. Ever try to do backup and recovery from one cloud to another? eDiscovery? Clouds, thin clients, etc. are clearly emerging as mainstream and part of our IT vision, but IMHO 2010 will be the year that IT doesn’t matter.
SAS 70’s more of an access-control thing; you could be hopelessly riddled with un-patched software and still (in theory?) pass. But, yeah, “I loves the Amazon, look at the friendly logo” just isn’t what you want to tell an auditor!
I think Amazon are bringing in a new EU node for EC2, but of course that won’t matter much for compliance if it’s wired back to Amazon Central Control in Seattle.
Perhaps 3Tera’s model is better at dealing with this. Then you’ve got a variety of cloud hosts with varying levels of SLAs and controls.
Interesting question: What’s more important, diversification of hosting organisation, or of hosting infrastructure? And when?
I have just had the following conversation @ Minibar in London:
Me: What about small financial firms that have compliance issues?
Hosting Company Person: Oh, those guys are f*cked! It’s just too expensive to cater to. Everyone’s stuck doing their own hosting for that.
Honest at least!
(MS RDP is a pretty good lightweight remote-UI protocol. I’ve played CDs over it from my work PC, bounced through a VPN to another office in the Midlands.)